2025-09-19 00:00:10.434036 | Job console starting
2025-09-19 00:00:10.454240 | Updating git repos
2025-09-19 00:00:10.605266 | Cloning repos into workspace
2025-09-19 00:00:10.805699 | Restoring repo states
2025-09-19 00:00:10.833802 | Merging changes
2025-09-19 00:00:10.833818 | Checking out repos
2025-09-19 00:00:11.044429 | Preparing playbooks
2025-09-19 00:00:11.880568 | Running Ansible setup
2025-09-19 00:00:17.509231 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-09-19 00:00:19.160357 |
2025-09-19 00:00:19.160484 | PLAY [Base pre]
2025-09-19 00:00:19.177587 |
2025-09-19 00:00:19.177733 | TASK [Setup log path fact]
2025-09-19 00:00:19.197112 | orchestrator | ok
2025-09-19 00:00:19.240320 |
2025-09-19 00:00:19.240461 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-19 00:00:19.281603 | orchestrator | ok
2025-09-19 00:00:19.314105 |
2025-09-19 00:00:19.314241 | TASK [emit-job-header : Print job information]
2025-09-19 00:00:19.405852 | # Job Information
2025-09-19 00:00:19.406116 | Ansible Version: 2.16.14
2025-09-19 00:00:19.406157 | Job: testbed-deploy-stable-in-a-nutshell-with-tempest-ubuntu-24.04
2025-09-19 00:00:19.406198 | Pipeline: periodic-midnight
2025-09-19 00:00:19.406225 | Executor: 521e9411259a
2025-09-19 00:00:19.406245 | Triggered by: https://github.com/osism/testbed
2025-09-19 00:00:19.406267 | Event ID: d277075af18c49b7ba39913567d664d6
2025-09-19 00:00:19.424878 |
2025-09-19 00:00:19.425002 | LOOP [emit-job-header : Print node information]
2025-09-19 00:00:19.767697 | orchestrator | ok:
2025-09-19 00:00:19.767971 | orchestrator | # Node Information
2025-09-19 00:00:19.768013 | orchestrator | Inventory Hostname: orchestrator
2025-09-19 00:00:19.768038 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-09-19 00:00:19.768060 | orchestrator | Username: zuul-testbed02
2025-09-19 00:00:19.768081 | orchestrator | Distro: Debian 12.12
2025-09-19 00:00:19.768104 | orchestrator | Provider: static-testbed
2025-09-19 00:00:19.768125 | orchestrator | Region:
2025-09-19 00:00:19.768146 | orchestrator | Label: testbed-orchestrator
2025-09-19 00:00:19.768165 | orchestrator | Product Name: OpenStack Nova
2025-09-19 00:00:19.768184 | orchestrator | Interface IP: 81.163.193.140
2025-09-19 00:00:19.796649 |
2025-09-19 00:00:19.796765 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-09-19 00:00:21.439992 | orchestrator -> localhost | changed
2025-09-19 00:00:21.447667 |
2025-09-19 00:00:21.447775 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-09-19 00:00:23.444348 | orchestrator -> localhost | changed
2025-09-19 00:00:23.456511 |
2025-09-19 00:00:23.456606 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-09-19 00:00:24.050771 | orchestrator -> localhost | ok
2025-09-19 00:00:24.056614 |
2025-09-19 00:00:24.056719 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-09-19 00:00:24.114413 | orchestrator | ok
2025-09-19 00:00:24.138591 | orchestrator | included: /var/lib/zuul/builds/55676f51bab14a6e86aaaf487e9417c0/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-09-19 00:00:24.154355 |
2025-09-19 00:00:24.154446 | TASK [add-build-sshkey : Create Temp SSH key]
2025-09-19 00:00:26.182608 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-09-19 00:00:26.183602 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/55676f51bab14a6e86aaaf487e9417c0/work/55676f51bab14a6e86aaaf487e9417c0_id_rsa
2025-09-19 00:00:26.183689 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/55676f51bab14a6e86aaaf487e9417c0/work/55676f51bab14a6e86aaaf487e9417c0_id_rsa.pub
2025-09-19 00:00:26.183715 | orchestrator -> localhost | The key fingerprint is:
2025-09-19 00:00:26.183737 | orchestrator -> localhost | SHA256:9vEm3otfDS6vpfsym9yl3WiEcpWZrt27qFJ09nPkzZg zuul-build-sshkey
2025-09-19 00:00:26.183756 | orchestrator -> localhost | The key's randomart image is:
2025-09-19 00:00:26.183784 | orchestrator -> localhost | +---[RSA 3072]----+
2025-09-19 00:00:26.183802 | orchestrator -> localhost | | |
2025-09-19 00:00:26.183820 | orchestrator -> localhost | | |
2025-09-19 00:00:26.183837 | orchestrator -> localhost | | + |
2025-09-19 00:00:26.183854 | orchestrator -> localhost | | . o= .|
2025-09-19 00:00:26.183871 | orchestrator -> localhost | | S o o+o*.|
2025-09-19 00:00:26.183890 | orchestrator -> localhost | | . ..+o.E==|
2025-09-19 00:00:26.183908 | orchestrator -> localhost | | +o+++.=|
2025-09-19 00:00:26.183925 | orchestrator -> localhost | | o *+O+=o|
2025-09-19 00:00:26.183942 | orchestrator -> localhost | | +o%@*o=|
2025-09-19 00:00:26.183959 | orchestrator -> localhost | +----[SHA256]-----+
2025-09-19 00:00:26.184009 | orchestrator -> localhost | ok: Runtime: 0:00:00.959135
2025-09-19 00:00:26.190002 |
2025-09-19 00:00:26.190081 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-09-19 00:00:26.217798 | orchestrator | ok
2025-09-19 00:00:26.251418 | orchestrator | included: /var/lib/zuul/builds/55676f51bab14a6e86aaaf487e9417c0/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-09-19 00:00:26.282910 |
2025-09-19 00:00:26.283007 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-09-19 00:00:26.316828 | orchestrator | skipping: Conditional result was False
2025-09-19 00:00:26.323540 |
2025-09-19 00:00:26.323627 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-09-19 00:00:27.172298 | orchestrator | changed
2025-09-19 00:00:27.177373 |
2025-09-19 00:00:27.177448 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-09-19 00:00:27.466387 | orchestrator | ok
2025-09-19 00:00:27.479923 |
2025-09-19 00:00:27.480016 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-09-19 00:00:28.068761 | orchestrator | ok
2025-09-19 00:00:28.078464 |
2025-09-19 00:00:28.078553 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-09-19 00:00:28.579237 | orchestrator | ok
2025-09-19 00:00:28.597778 |
2025-09-19 00:00:28.597885 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-09-19 00:00:28.636018 | orchestrator | skipping: Conditional result was False
2025-09-19 00:00:28.644021 |
2025-09-19 00:00:28.644134 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-09-19 00:00:29.897534 | orchestrator -> localhost | changed
2025-09-19 00:00:29.914695 |
2025-09-19 00:00:29.914804 | TASK [add-build-sshkey : Add back temp key]
2025-09-19 00:00:30.385784 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/55676f51bab14a6e86aaaf487e9417c0/work/55676f51bab14a6e86aaaf487e9417c0_id_rsa (zuul-build-sshkey)
2025-09-19 00:00:30.385978 | orchestrator -> localhost | ok: Runtime: 0:00:00.021362
2025-09-19 00:00:30.393332 |
2025-09-19 00:00:30.393420 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-09-19 00:00:31.086808 | orchestrator | ok
2025-09-19 00:00:31.118866 |
2025-09-19 00:00:31.118987 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-09-19 00:00:31.165866 | orchestrator | skipping: Conditional result was False
2025-09-19 00:00:31.256975 |
2025-09-19 00:00:31.257072 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-09-19 00:00:31.815449 | orchestrator | ok
2025-09-19 00:00:31.832760 |
2025-09-19 00:00:31.832863 | TASK [validate-host : Define zuul_info_dir fact]
2025-09-19 00:00:31.870349 | orchestrator | ok
2025-09-19 00:00:31.877093 |
2025-09-19 00:00:31.877181 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-09-19 00:00:32.620200 | orchestrator -> localhost | ok
2025-09-19 00:00:32.626137 |
2025-09-19 00:00:32.626227 | TASK [validate-host : Collect information about the host]
2025-09-19 00:00:33.995674 | orchestrator | ok
2025-09-19 00:00:34.026607 |
2025-09-19 00:00:34.026718 | TASK [validate-host : Sanitize hostname]
2025-09-19 00:00:34.155186 | orchestrator | ok
2025-09-19 00:00:34.159476 |
2025-09-19 00:00:34.159552 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-09-19 00:00:35.267939 | orchestrator -> localhost | changed
2025-09-19 00:00:35.274492 |
2025-09-19 00:00:35.274585 | TASK [validate-host : Collect information about zuul worker]
2025-09-19 00:00:35.699552 | orchestrator | ok
2025-09-19 00:00:35.704478 |
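The add-build-sshkey tasks above follow a simple pattern: generate a per-build key, install it on every node, drop the master key from the agent, then verify SSH still works with the build key alone. A minimal sketch of that flow is below; this is an illustrative reconstruction, not the role's actual code, and the `WORK_DIR` temp directory and the commented localhost-only verification are assumptions.

```shell
# Illustrative sketch of the add-build-sshkey flow (not the role's real code).
set -eu
WORK_DIR="$(mktemp -d)"                        # stands in for Zuul's work dir
BUILD_UUID="55676f51bab14a6e86aaaf487e9417c0"  # build UUID seen in the log
KEY="${WORK_DIR}/${BUILD_UUID}_id_rsa"

# "Create Temp SSH key": a 3072-bit RSA key with the comment from the log.
ssh-keygen -q -t rsa -b 3072 -N '' -C zuul-build-sshkey -f "$KEY"

# "Enable access via build key on all nodes" would append ${KEY}.pub to each
# node's authorized_keys; "Verify we can still SSH to all nodes" would then
# re-connect using only the new key, e.g.:
#   ssh -i "$KEY" -o BatchMode=yes -o IdentitiesOnly=yes "$node" true
echo "created ${KEY}.pub"
```

Generating a fresh key per build means the long-lived master key can be removed from the agent early, so a compromised test node never sees a reusable credential.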
2025-09-19 00:00:35.704561 | TASK [validate-host : Write out all zuul information for each host]
2025-09-19 00:00:36.554677 | orchestrator -> localhost | changed
2025-09-19 00:00:36.563146 |
2025-09-19 00:00:36.563228 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-09-19 00:00:36.865410 | orchestrator | ok
2025-09-19 00:00:36.870216 |
2025-09-19 00:00:36.870295 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-09-19 00:01:12.330987 | orchestrator | changed:
2025-09-19 00:01:12.331230 | orchestrator | .d..t...... src/
2025-09-19 00:01:12.331266 | orchestrator | .d..t...... src/github.com/
2025-09-19 00:01:12.331292 | orchestrator | .d..t...... src/github.com/osism/
2025-09-19 00:01:12.331314 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-09-19 00:01:12.331336 | orchestrator | RedHat.yml
2025-09-19 00:01:12.362579 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-09-19 00:01:12.362598 | orchestrator | RedHat.yml
2025-09-19 00:01:12.362671 | orchestrator | = 2.2.0"...
2025-09-19 00:01:26.660243 | orchestrator | 00:01:26.660 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-09-19 00:01:26.687439 | orchestrator | 00:01:26.687 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-09-19 00:01:26.893711 | orchestrator | 00:01:26.893 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-09-19 00:01:27.365871 | orchestrator | 00:01:27.365 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-09-19 00:01:27.754644 | orchestrator | 00:01:27.754 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-09-19 00:01:28.561103 | orchestrator | 00:01:28.560 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-09-19 00:01:29.000419 | orchestrator | 00:01:29.000 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-09-19 00:01:29.658771 | orchestrator | 00:01:29.658 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-09-19 00:01:29.658822 | orchestrator | 00:01:29.658 STDOUT terraform: Providers are signed by their developers.
2025-09-19 00:01:29.658828 | orchestrator | 00:01:29.658 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-09-19 00:01:29.658851 | orchestrator | 00:01:29.658 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-09-19 00:01:29.658901 | orchestrator | 00:01:29.658 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-09-19 00:01:29.658988 | orchestrator | 00:01:29.658 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-09-19 00:01:29.659055 | orchestrator | 00:01:29.658 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-09-19 00:01:29.659079 | orchestrator | 00:01:29.659 STDOUT terraform: you run "tofu init" in the future.
2025-09-19 00:01:29.659208 | orchestrator | 00:01:29.659 STDOUT terraform: OpenTofu has been successfully initialized!
2025-09-19 00:01:29.659316 | orchestrator | 00:01:29.659 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-09-19 00:01:29.659323 | orchestrator | 00:01:29.659 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-09-19 00:01:29.659330 | orchestrator | 00:01:29.659 STDOUT terraform: should now work.
2025-09-19 00:01:29.659381 | orchestrator | 00:01:29.659 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-09-19 00:01:29.662524 | orchestrator | 00:01:29.659 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-09-19 00:01:29.662551 | orchestrator | 00:01:29.659 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-09-19 00:01:29.752068 | orchestrator | 00:01:29.751 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-09-19 00:01:29.752164 | orchestrator | 00:01:29.751 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-09-19 00:01:29.929659 | orchestrator | 00:01:29.929 STDOUT terraform: Created and switched to workspace "ci"!
2025-09-19 00:01:29.929718 | orchestrator | 00:01:29.929 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-09-19 00:01:29.929727 | orchestrator | 00:01:29.929 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-09-19 00:01:29.929732 | orchestrator | 00:01:29.929 STDOUT terraform: for this configuration.
2025-09-19 00:01:30.121854 | orchestrator | 00:01:30.119 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-09-19 00:01:30.121939 | orchestrator | 00:01:30.119 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
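The repeated Terragrunt WARN lines above each name their own replacement. Collecting them into one place gives roughly the following; this is my reading of the log's warnings, not advice verified against a specific Terragrunt release:

```shell
# Replacements suggested by the WARN lines in the log itself.
# Old environment variable -> new one:
export TG_TF_PATH=/home/zuul-testbed02/terraform  # instead of TERRAGRUNT_TFPATH

# Old bare subcommands -> the `terragrunt run -- <cmd>` form (shown, not run):
#   terragrunt workspace new ci   ->   terragrunt run -- workspace new ci
#   terragrunt fmt                ->   terragrunt run -- fmt
echo "TG_TF_PATH=${TG_TF_PATH}"
```

Making these substitutions in the job's scripts would silence the warnings now and avoid breakage when the deprecated forms are removed.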
2025-09-19 00:01:30.247236 | orchestrator | 00:01:30.247 STDOUT terraform: ci.auto.tfvars
2025-09-19 00:01:30.696534 | orchestrator | 00:01:30.696 STDOUT terraform: default_custom.tf
2025-09-19 00:01:30.812431 | orchestrator | 00:01:30.812 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-09-19 00:01:31.870418 | orchestrator | 00:01:31.869 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-09-19 00:01:32.469111 | orchestrator | 00:01:32.468 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-09-19 00:01:32.734147 | orchestrator | 00:01:32.730 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-09-19 00:01:32.734230 | orchestrator | 00:01:32.730 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-09-19 00:01:32.734241 | orchestrator | 00:01:32.730 STDOUT terraform:  + create
2025-09-19 00:01:32.734250 | orchestrator | 00:01:32.730 STDOUT terraform:  <= read (data resources)
2025-09-19 00:01:32.734258 | orchestrator | 00:01:32.730 STDOUT terraform: OpenTofu will perform the following actions:
2025-09-19 00:01:32.734265 | orchestrator | 00:01:32.730 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply
2025-09-19 00:01:32.734272 | orchestrator | 00:01:32.730 STDOUT terraform:  # (config refers to values not yet known)
2025-09-19 00:01:32.734279 | orchestrator | 00:01:32.730 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-09-19 00:01:32.734286 | orchestrator | 00:01:32.730 STDOUT terraform:  + checksum = (known after apply)
2025-09-19 00:01:32.734292 | orchestrator | 00:01:32.730 STDOUT terraform:  + created_at = (known after apply)
2025-09-19 00:01:32.734299 | orchestrator | 00:01:32.730 STDOUT terraform:  + file = (known after apply)
2025-09-19 00:01:32.734305 | orchestrator | 00:01:32.730 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.734312 | orchestrator | 00:01:32.730 STDOUT terraform:  + metadata = (known after apply)
2025-09-19 00:01:32.734333 | orchestrator | 00:01:32.730 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-09-19 00:01:32.734340 | orchestrator | 00:01:32.730 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-09-19 00:01:32.734347 | orchestrator | 00:01:32.731 STDOUT terraform:  + most_recent = true
2025-09-19 00:01:32.734354 | orchestrator | 00:01:32.731 STDOUT terraform:  + name = (known after apply)
2025-09-19 00:01:32.734360 | orchestrator | 00:01:32.731 STDOUT terraform:  + protected = (known after apply)
2025-09-19 00:01:32.734366 | orchestrator | 00:01:32.731 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.734373 | orchestrator | 00:01:32.731 STDOUT terraform:  + schema = (known after apply)
2025-09-19 00:01:32.734380 | orchestrator | 00:01:32.731 STDOUT terraform:  + size_bytes = (known after apply)
2025-09-19 00:01:32.734386 | orchestrator | 00:01:32.731 STDOUT terraform:  + tags = (known after apply)
2025-09-19 00:01:32.734393 | orchestrator | 00:01:32.731 STDOUT terraform:  + updated_at = (known after apply)
2025-09-19 00:01:32.734399 | orchestrator | 00:01:32.731 STDOUT terraform:  }
2025-09-19 00:01:32.734409 | orchestrator | 00:01:32.731 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply
2025-09-19 00:01:32.734416 | orchestrator | 00:01:32.731 STDOUT terraform:  # (config refers to values not yet known)
2025-09-19 00:01:32.734423 | orchestrator | 00:01:32.731 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-09-19 00:01:32.734429 | orchestrator | 00:01:32.731 STDOUT terraform:  + checksum = (known after apply)
2025-09-19 00:01:32.734436 | orchestrator | 00:01:32.731 STDOUT terraform:  + created_at = (known after apply)
2025-09-19 00:01:32.734442 | orchestrator | 00:01:32.731 STDOUT terraform:  + file = (known after apply)
2025-09-19 00:01:32.734448 | orchestrator | 00:01:32.731 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.734455 | orchestrator | 00:01:32.731 STDOUT terraform:  + metadata = (known after apply)
2025-09-19 00:01:32.734461 | orchestrator | 00:01:32.731 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-09-19 00:01:32.734468 | orchestrator | 00:01:32.731 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-09-19 00:01:32.734480 | orchestrator | 00:01:32.732 STDOUT terraform:  + most_recent = true
2025-09-19 00:01:32.734487 | orchestrator | 00:01:32.732 STDOUT terraform:  + name = (known after apply)
2025-09-19 00:01:32.734493 | orchestrator | 00:01:32.732 STDOUT terraform:  + protected = (known after apply)
2025-09-19 00:01:32.734500 | orchestrator | 00:01:32.732 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.734521 | orchestrator | 00:01:32.732 STDOUT terraform:  + schema = (known after apply)
2025-09-19 00:01:32.734529 | orchestrator | 00:01:32.732 STDOUT terraform:  + size_bytes = (known after apply)
2025-09-19 00:01:32.734535 | orchestrator | 00:01:32.732 STDOUT terraform:  + tags = (known after apply)
2025-09-19 00:01:32.734541 | orchestrator | 00:01:32.732 STDOUT terraform:  + updated_at = (known after apply)
2025-09-19 00:01:32.734548 | orchestrator | 00:01:32.732 STDOUT terraform:  }
2025-09-19 00:01:32.734554 | orchestrator | 00:01:32.732 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created
2025-09-19 00:01:32.734566 | orchestrator | 00:01:32.732 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" {
2025-09-19 00:01:32.734573 | orchestrator | 00:01:32.732 STDOUT terraform:  + content = (known after apply)
2025-09-19 00:01:32.734580 | orchestrator | 00:01:32.732 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-09-19 00:01:32.734586 | orchestrator | 00:01:32.732 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-09-19 00:01:32.734593 | orchestrator | 00:01:32.732 STDOUT terraform:  + content_md5 = (known after apply)
2025-09-19 00:01:32.734599 | orchestrator | 00:01:32.732 STDOUT terraform:  + content_sha1 = (known after apply)
2025-09-19 00:01:32.734606 | orchestrator | 00:01:32.732 STDOUT terraform:  + content_sha256 = (known after apply)
2025-09-19 00:01:32.734613 | orchestrator | 00:01:32.732 STDOUT terraform:  + content_sha512 = (known after apply)
2025-09-19 00:01:32.734620 | orchestrator | 00:01:32.732 STDOUT terraform:  + directory_permission = "0777"
2025-09-19 00:01:32.734626 | orchestrator | 00:01:32.733 STDOUT terraform:  + file_permission = "0644"
2025-09-19 00:01:32.734633 | orchestrator | 00:01:32.733 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci"
2025-09-19 00:01:32.734639 | orchestrator | 00:01:32.733 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.734646 | orchestrator | 00:01:32.733 STDOUT terraform:  }
2025-09-19 00:01:32.734652 | orchestrator | 00:01:32.733 STDOUT terraform:  # local_file.id_rsa_pub will be created
2025-09-19 00:01:32.734659 | orchestrator | 00:01:32.733 STDOUT terraform:  + resource "local_file" "id_rsa_pub" {
2025-09-19 00:01:32.734665 | orchestrator | 00:01:32.733 STDOUT terraform:  + content = (known after apply)
2025-09-19 00:01:32.734672 | orchestrator | 00:01:32.733 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-09-19 00:01:32.734678 | orchestrator | 00:01:32.733 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-09-19 00:01:32.734685 | orchestrator | 00:01:32.733 STDOUT terraform:  + content_md5 = (known after apply)
2025-09-19 00:01:32.734691 | orchestrator | 00:01:32.733 STDOUT terraform:  + content_sha1 = (known after apply)
2025-09-19 00:01:32.734698 | orchestrator | 00:01:32.733 STDOUT terraform:  + content_sha256 = (known after apply)
2025-09-19 00:01:32.734704 | orchestrator | 00:01:32.733 STDOUT terraform:  + content_sha512 = (known after apply)
2025-09-19 00:01:32.734711 | orchestrator | 00:01:32.733 STDOUT terraform:  + directory_permission = "0777"
2025-09-19 00:01:32.734717 | orchestrator | 00:01:32.733 STDOUT terraform:  + file_permission = "0644"
2025-09-19 00:01:32.734724 | orchestrator | 00:01:32.733 STDOUT terraform:  + filename = ".id_rsa.ci.pub"
2025-09-19 00:01:32.734730 | orchestrator | 00:01:32.733 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.734737 | orchestrator | 00:01:32.733 STDOUT terraform:  }
2025-09-19 00:01:32.734747 | orchestrator | 00:01:32.733 STDOUT terraform:  # local_file.inventory will be created
2025-09-19 00:01:32.734794 | orchestrator | 00:01:32.733 STDOUT terraform:  + resource "local_file" "inventory" {
2025-09-19 00:01:32.734801 | orchestrator | 00:01:32.733 STDOUT terraform:  + content = (known after apply)
2025-09-19 00:01:32.734813 | orchestrator | 00:01:32.733 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-09-19 00:01:32.734874 | orchestrator | 00:01:32.733 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-09-19 00:01:32.734961 | orchestrator | 00:01:32.734 STDOUT terraform:  + content_md5 = (known after apply)
2025-09-19 00:01:32.735049 | orchestrator | 00:01:32.734 STDOUT terraform:  + content_sha1 = (known after apply)
2025-09-19 00:01:32.735127 | orchestrator | 00:01:32.735 STDOUT terraform:  + content_sha256 = (known after apply)
2025-09-19 00:01:32.735201 | orchestrator | 00:01:32.735 STDOUT terraform:  + content_sha512 = (known after apply)
2025-09-19 00:01:32.735254 | orchestrator | 00:01:32.735 STDOUT terraform:  + directory_permission = "0777"
2025-09-19 00:01:32.735307 | orchestrator | 00:01:32.735 STDOUT terraform:  + file_permission = "0644"
2025-09-19 00:01:32.735376 | orchestrator | 00:01:32.735 STDOUT terraform:  + filename = "inventory.ci"
2025-09-19 00:01:32.735448 | orchestrator | 00:01:32.735 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.735484 | orchestrator | 00:01:32.735 STDOUT terraform:  }
2025-09-19 00:01:32.735545 | orchestrator | 00:01:32.735 STDOUT terraform:  # local_sensitive_file.id_rsa will be created
2025-09-19 00:01:32.735608 | orchestrator | 00:01:32.735 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" {
2025-09-19 00:01:32.735674 | orchestrator | 00:01:32.735 STDOUT terraform:  + content = (sensitive value)
2025-09-19 00:01:32.735746 | orchestrator | 00:01:32.735 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-09-19 00:01:32.735833 | orchestrator | 00:01:32.735 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-09-19 00:01:32.735904 | orchestrator | 00:01:32.735 STDOUT terraform:  + content_md5 = (known after apply)
2025-09-19 00:01:32.735974 | orchestrator | 00:01:32.735 STDOUT terraform:  + content_sha1 = (known after apply)
2025-09-19 00:01:32.736043 | orchestrator | 00:01:32.735 STDOUT terraform:  + content_sha256 = (known after apply)
2025-09-19 00:01:32.736111 | orchestrator | 00:01:32.736 STDOUT terraform:  + content_sha512 = (known after apply)
2025-09-19 00:01:32.736163 | orchestrator | 00:01:32.736 STDOUT terraform:  + directory_permission = "0700"
2025-09-19 00:01:32.736215 | orchestrator | 00:01:32.736 STDOUT terraform:  + file_permission = "0600"
2025-09-19 00:01:32.736275 | orchestrator | 00:01:32.736 STDOUT terraform:  + filename = ".id_rsa.ci"
2025-09-19 00:01:32.736347 | orchestrator | 00:01:32.736 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.736382 | orchestrator | 00:01:32.736 STDOUT terraform:  }
2025-09-19 00:01:32.736444 | orchestrator | 00:01:32.736 STDOUT terraform:  # null_resource.node_semaphore will be created
2025-09-19 00:01:32.736503 | orchestrator | 00:01:32.736 STDOUT terraform:  + resource "null_resource" "node_semaphore" {
2025-09-19 00:01:32.736554 | orchestrator | 00:01:32.736 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.736589 | orchestrator | 00:01:32.736 STDOUT terraform:  }
2025-09-19 00:01:32.736701 | orchestrator | 00:01:32.736 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-09-19 00:01:32.736817 | orchestrator | 00:01:32.736 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-09-19 00:01:32.736890 | orchestrator | 00:01:32.736 STDOUT terraform:  + attachment = (known after apply)
2025-09-19 00:01:32.736942 | orchestrator | 00:01:32.736 STDOUT terraform:  + availability_zone = "nova"
2025-09-19 00:01:32.737013 | orchestrator | 00:01:32.736 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.737081 | orchestrator | 00:01:32.737 STDOUT terraform:  + image_id = (known after apply)
2025-09-19 00:01:32.737149 | orchestrator | 00:01:32.737 STDOUT terraform:  + metadata = (known after apply)
2025-09-19 00:01:32.737235 | orchestrator | 00:01:32.737 STDOUT terraform:  + name = "testbed-volume-manager-base"
2025-09-19 00:01:32.737308 | orchestrator | 00:01:32.737 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.737355 | orchestrator | 00:01:32.737 STDOUT terraform:  + size = 80
2025-09-19 00:01:32.737405 | orchestrator | 00:01:32.737 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-19 00:01:32.737455 | orchestrator | 00:01:32.737 STDOUT terraform:  + volume_type = "ssd"
2025-09-19 00:01:32.737488 | orchestrator | 00:01:32.737 STDOUT terraform:  }
2025-09-19 00:01:32.737575 | orchestrator | 00:01:32.737 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-09-19 00:01:32.737659 | orchestrator | 00:01:32.737 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-19 00:01:32.737730 | orchestrator | 00:01:32.737 STDOUT terraform:  + attachment = (known after apply)
2025-09-19 00:01:32.737816 | orchestrator | 00:01:32.737 STDOUT terraform:  + availability_zone = "nova"
2025-09-19 00:01:32.737890 | orchestrator | 00:01:32.737 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.737960 | orchestrator | 00:01:32.737 STDOUT terraform:  + image_id = (known after apply)
2025-09-19 00:01:32.738046 | orchestrator | 00:01:32.737 STDOUT terraform:  + metadata = (known after apply)
2025-09-19 00:01:32.738131 | orchestrator | 00:01:32.738 STDOUT terraform:  + name = "testbed-volume-0-node-base"
2025-09-19 00:01:32.738199 | orchestrator | 00:01:32.738 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.738246 | orchestrator | 00:01:32.738 STDOUT terraform:  + size = 80
2025-09-19 00:01:32.738296 | orchestrator | 00:01:32.738 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-19 00:01:32.738340 | orchestrator | 00:01:32.738 STDOUT terraform:  + volume_type = "ssd"
2025-09-19 00:01:32.738369 | orchestrator | 00:01:32.738 STDOUT terraform:  }
2025-09-19 00:01:32.738444 | orchestrator | 00:01:32.738 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-09-19 00:01:32.738519 | orchestrator | 00:01:32.738 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-19 00:01:32.738578 | orchestrator | 00:01:32.738 STDOUT terraform:  + attachment = (known after apply)
2025-09-19 00:01:32.738628 | orchestrator | 00:01:32.738 STDOUT terraform:  + availability_zone = "nova"
2025-09-19 00:01:32.738690 | orchestrator | 00:01:32.738 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.738760 | orchestrator | 00:01:32.738 STDOUT terraform:  + image_id = (known after apply)
2025-09-19 00:01:32.738822 | orchestrator | 00:01:32.738 STDOUT terraform:  + metadata = (known after apply)
2025-09-19 00:01:32.738939 | orchestrator | 00:01:32.738 STDOUT terraform:  + name = "testbed-volume-1-node-base"
2025-09-19 00:01:32.739004 | orchestrator | 00:01:32.738 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.739057 | orchestrator | 00:01:32.739 STDOUT terraform:  + size = 80
2025-09-19 00:01:32.739132 | orchestrator | 00:01:32.739 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-19 00:01:32.739205 | orchestrator | 00:01:32.739 STDOUT terraform:  + volume_type = "ssd"
2025-09-19 00:01:32.739238 | orchestrator | 00:01:32.739 STDOUT terraform:  }
2025-09-19 00:01:32.739319 | orchestrator | 00:01:32.739 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-09-19 00:01:32.739394 | orchestrator | 00:01:32.739 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-19 00:01:32.739455 | orchestrator | 00:01:32.739 STDOUT terraform:  + attachment = (known after apply)
2025-09-19 00:01:32.739504 | orchestrator | 00:01:32.739 STDOUT terraform:  + availability_zone = "nova"
2025-09-19 00:01:32.739565 | orchestrator | 00:01:32.739 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.739626 | orchestrator | 00:01:32.739 STDOUT terraform:  + image_id = (known after apply)
2025-09-19 00:01:32.739685 | orchestrator | 00:01:32.739 STDOUT terraform:  + metadata = (known after apply)
2025-09-19 00:01:32.739840 | orchestrator | 00:01:32.739 STDOUT terraform:  + name = "testbed-volume-2-node-base"
2025-09-19 00:01:32.739905 | orchestrator | 00:01:32.739 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.739946 | orchestrator | 00:01:32.739 STDOUT terraform:  + size = 80
2025-09-19 00:01:32.739992 | orchestrator | 00:01:32.739 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-19 00:01:32.740039 | orchestrator | 00:01:32.740 STDOUT terraform:  + volume_type = "ssd"
2025-09-19 00:01:32.740068 | orchestrator | 00:01:32.740 STDOUT terraform:  }
2025-09-19 00:01:32.740144 | orchestrator | 00:01:32.740 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-09-19 00:01:32.740217 | orchestrator | 00:01:32.740 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-19 00:01:32.740298 | orchestrator | 00:01:32.740 STDOUT terraform:  + attachment = (known after apply)
2025-09-19 00:01:32.740360 | orchestrator | 00:01:32.740 STDOUT terraform:  + availability_zone = "nova"
2025-09-19 00:01:32.740419 | orchestrator | 00:01:32.740 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.740475 | orchestrator | 00:01:32.740 STDOUT terraform:  + image_id = (known after apply)
2025-09-19 00:01:32.740528 | orchestrator | 00:01:32.740 STDOUT terraform:  + metadata = (known after apply)
2025-09-19 00:01:32.740603 | orchestrator | 00:01:32.740 STDOUT terraform:  + name = "testbed-volume-3-node-base"
2025-09-19 00:01:32.740661 | orchestrator | 00:01:32.740 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.742575 | orchestrator | 00:01:32.742 STDOUT terraform:  + size = 80
2025-09-19 00:01:32.742678 | orchestrator | 00:01:32.742 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-19 00:01:32.742727 | orchestrator | 00:01:32.742 STDOUT terraform:  + volume_type = "ssd"
2025-09-19 00:01:32.742774 | orchestrator | 00:01:32.742 STDOUT terraform:  }
2025-09-19 00:01:32.742848 | orchestrator | 00:01:32.742 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-09-19 00:01:32.742924 | orchestrator | 00:01:32.742 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-19 00:01:32.742982 | orchestrator | 00:01:32.742 STDOUT terraform:  + attachment = (known after apply)
2025-09-19 00:01:32.743024 | orchestrator | 00:01:32.742 STDOUT terraform:  + availability_zone = "nova"
2025-09-19 00:01:32.743079 | orchestrator | 00:01:32.743 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.743132 | orchestrator | 00:01:32.743 STDOUT terraform:  + image_id = (known after apply)
2025-09-19 00:01:32.743265 | orchestrator | 00:01:32.743 STDOUT
terraform:  + metadata = (known after apply) 2025-09-19 00:01:32.743499 | orchestrator | 00:01:32.743 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-09-19 00:01:32.743819 | orchestrator | 00:01:32.743 STDOUT terraform:  + region = (known after apply) 2025-09-19 00:01:32.744468 | orchestrator | 00:01:32.744 STDOUT terraform:  + size = 80 2025-09-19 00:01:32.744492 | orchestrator | 00:01:32.744 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 00:01:32.744499 | orchestrator | 00:01:32.744 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 00:01:32.744516 | orchestrator | 00:01:32.744 STDOUT terraform:  } 2025-09-19 00:01:32.744571 | orchestrator | 00:01:32.744 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-09-19 00:01:32.744621 | orchestrator | 00:01:32.744 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-19 00:01:32.744655 | orchestrator | 00:01:32.744 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 00:01:32.744680 | orchestrator | 00:01:32.744 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 00:01:32.744713 | orchestrator | 00:01:32.744 STDOUT terraform:  + id = (known after apply) 2025-09-19 00:01:32.744763 | orchestrator | 00:01:32.744 STDOUT terraform:  + image_id = (known after apply) 2025-09-19 00:01:32.744798 | orchestrator | 00:01:32.744 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 00:01:32.744843 | orchestrator | 00:01:32.744 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-09-19 00:01:32.744890 | orchestrator | 00:01:32.744 STDOUT terraform:  + region = (known after apply) 2025-09-19 00:01:32.744897 | orchestrator | 00:01:32.744 STDOUT terraform:  + size = 80 2025-09-19 00:01:32.744929 | orchestrator | 00:01:32.744 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 00:01:32.744936 | orchestrator | 00:01:32.744 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 
00:01:32.744951 | orchestrator | 00:01:32.744 STDOUT terraform:  } 2025-09-19 00:01:32.745080 | orchestrator | 00:01:32.744 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-09-19 00:01:32.745219 | orchestrator | 00:01:32.744 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 00:01:32.745299 | orchestrator | 00:01:32.745 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 00:01:32.745349 | orchestrator | 00:01:32.745 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 00:01:32.745354 | orchestrator | 00:01:32.745 STDOUT terraform:  + id = (known after apply) 2025-09-19 00:01:32.745359 | orchestrator | 00:01:32.745 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 00:01:32.745366 | orchestrator | 00:01:32.745 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-09-19 00:01:32.745371 | orchestrator | 00:01:32.745 STDOUT terraform:  + region = (known after apply) 2025-09-19 00:01:32.745408 | orchestrator | 00:01:32.745 STDOUT terraform:  + size = 20 2025-09-19 00:01:32.745444 | orchestrator | 00:01:32.745 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 00:01:32.745511 | orchestrator | 00:01:32.745 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 00:01:32.745518 | orchestrator | 00:01:32.745 STDOUT terraform:  } 2025-09-19 00:01:32.745524 | orchestrator | 00:01:32.745 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-09-19 00:01:32.745593 | orchestrator | 00:01:32.745 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 00:01:32.745676 | orchestrator | 00:01:32.745 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 00:01:32.745693 | orchestrator | 00:01:32.745 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 00:01:32.745699 | orchestrator | 00:01:32.745 STDOUT terraform:  + id = (known after apply) 2025-09-19 00:01:32.745721 | 
orchestrator | 00:01:32.745 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 00:01:32.745812 | orchestrator | 00:01:32.745 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-09-19 00:01:32.745821 | orchestrator | 00:01:32.745 STDOUT terraform:  + region = (known after apply) 2025-09-19 00:01:32.746048 | orchestrator | 00:01:32.745 STDOUT terraform:  + size = 20 2025-09-19 00:01:32.746063 | orchestrator | 00:01:32.745 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 00:01:32.746068 | orchestrator | 00:01:32.745 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 00:01:32.746073 | orchestrator | 00:01:32.745 STDOUT terraform:  } 2025-09-19 00:01:32.746098 | orchestrator | 00:01:32.745 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-09-19 00:01:32.746107 | orchestrator | 00:01:32.745 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 00:01:32.746114 | orchestrator | 00:01:32.746 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 00:01:32.746155 | orchestrator | 00:01:32.746 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 00:01:32.746175 | orchestrator | 00:01:32.746 STDOUT terraform:  + id = (known after apply) 2025-09-19 00:01:32.746181 | orchestrator | 00:01:32.746 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 00:01:32.746226 | orchestrator | 00:01:32.746 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-09-19 00:01:32.746233 | orchestrator | 00:01:32.746 STDOUT terraform:  + region = (known after apply) 2025-09-19 00:01:32.746250 | orchestrator | 00:01:32.746 STDOUT terraform:  + size = 20 2025-09-19 00:01:32.746304 | orchestrator | 00:01:32.746 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 00:01:32.746310 | orchestrator | 00:01:32.746 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 00:01:32.746317 | orchestrator | 00:01:32.746 STDOUT terraform:  } 2025-09-19 00:01:32.746370 | 
orchestrator | 00:01:32.746 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-09-19 00:01:32.746437 | orchestrator | 00:01:32.746 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 00:01:32.746444 | orchestrator | 00:01:32.746 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 00:01:32.746450 | orchestrator | 00:01:32.746 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 00:01:32.746512 | orchestrator | 00:01:32.746 STDOUT terraform:  + id = (known after apply) 2025-09-19 00:01:32.746519 | orchestrator | 00:01:32.746 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 00:01:32.746572 | orchestrator | 00:01:32.746 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-09-19 00:01:32.746580 | orchestrator | 00:01:32.746 STDOUT terraform:  + region = (known after apply) 2025-09-19 00:01:32.746608 | orchestrator | 00:01:32.746 STDOUT terraform:  + size = 20 2025-09-19 00:01:32.746625 | orchestrator | 00:01:32.746 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 00:01:32.746632 | orchestrator | 00:01:32.746 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 00:01:32.746654 | orchestrator | 00:01:32.746 STDOUT terraform:  } 2025-09-19 00:01:32.746701 | orchestrator | 00:01:32.746 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-09-19 00:01:32.746791 | orchestrator | 00:01:32.746 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 00:01:32.746800 | orchestrator | 00:01:32.746 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 00:01:32.746807 | orchestrator | 00:01:32.746 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 00:01:32.746842 | orchestrator | 00:01:32.746 STDOUT terraform:  + id = (known after apply) 2025-09-19 00:01:32.746889 | orchestrator | 00:01:32.746 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 
00:01:32.746919 | orchestrator | 00:01:32.746 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-09-19 00:01:32.746974 | orchestrator | 00:01:32.746 STDOUT terraform:  + region = (known after apply) 2025-09-19 00:01:32.746995 | orchestrator | 00:01:32.746 STDOUT terraform:  + size = 20 2025-09-19 00:01:32.747002 | orchestrator | 00:01:32.746 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 00:01:32.747007 | orchestrator | 00:01:32.746 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 00:01:32.747013 | orchestrator | 00:01:32.747 STDOUT terraform:  } 2025-09-19 00:01:32.747065 | orchestrator | 00:01:32.747 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-09-19 00:01:32.747114 | orchestrator | 00:01:32.747 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 00:01:32.747171 | orchestrator | 00:01:32.747 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 00:01:32.747186 | orchestrator | 00:01:32.747 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 00:01:32.747192 | orchestrator | 00:01:32.747 STDOUT terraform:  + id = (known after apply) 2025-09-19 00:01:32.747246 | orchestrator | 00:01:32.747 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 00:01:32.747256 | orchestrator | 00:01:32.747 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-09-19 00:01:32.747309 | orchestrator | 00:01:32.747 STDOUT terraform:  + region = (known after apply) 2025-09-19 00:01:32.747318 | orchestrator | 00:01:32.747 STDOUT terraform:  + size = 20 2025-09-19 00:01:32.747324 | orchestrator | 00:01:32.747 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 00:01:32.747386 | orchestrator | 00:01:32.747 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 00:01:32.747395 | orchestrator | 00:01:32.747 STDOUT terraform:  } 2025-09-19 00:01:32.747401 | orchestrator | 00:01:32.747 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-09-19 00:01:32.747462 | orchestrator | 00:01:32.747 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 00:01:32.747469 | orchestrator | 00:01:32.747 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 00:01:32.747515 | orchestrator | 00:01:32.747 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 00:01:32.747522 | orchestrator | 00:01:32.747 STDOUT terraform:  + id = (known after apply) 2025-09-19 00:01:32.747581 | orchestrator | 00:01:32.747 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 00:01:32.747588 | orchestrator | 00:01:32.747 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-09-19 00:01:32.747653 | orchestrator | 00:01:32.747 STDOUT terraform:  + region = (known after apply) 2025-09-19 00:01:32.747659 | orchestrator | 00:01:32.747 STDOUT terraform:  + size = 20 2025-09-19 00:01:32.747665 | orchestrator | 00:01:32.747 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 00:01:32.747700 | orchestrator | 00:01:32.747 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 00:01:32.747709 | orchestrator | 00:01:32.747 STDOUT terraform:  } 2025-09-19 00:01:32.747740 | orchestrator | 00:01:32.747 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-09-19 00:01:32.747815 | orchestrator | 00:01:32.747 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 00:01:32.747851 | orchestrator | 00:01:32.747 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 00:01:32.747865 | orchestrator | 00:01:32.747 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 00:01:32.747906 | orchestrator | 00:01:32.747 STDOUT terraform:  + id = (known after apply) 2025-09-19 00:01:32.747968 | orchestrator | 00:01:32.747 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 00:01:32.747979 | orchestrator | 00:01:32.747 STDOUT 
terraform:  + name = "testbed-volume-7-node-4" 2025-09-19 00:01:32.748011 | orchestrator | 00:01:32.747 STDOUT terraform:  + region = (known after apply) 2025-09-19 00:01:32.748021 | orchestrator | 00:01:32.748 STDOUT terraform:  + size = 20 2025-09-19 00:01:32.748086 | orchestrator | 00:01:32.748 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 00:01:32.748094 | orchestrator | 00:01:32.748 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 00:01:32.748098 | orchestrator | 00:01:32.748 STDOUT terraform:  } 2025-09-19 00:01:32.748125 | orchestrator | 00:01:32.748 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-09-19 00:01:32.748173 | orchestrator | 00:01:32.748 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 00:01:32.748199 | orchestrator | 00:01:32.748 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 00:01:32.748240 | orchestrator | 00:01:32.748 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 00:01:32.748270 | orchestrator | 00:01:32.748 STDOUT terraform:  + id = (known after apply) 2025-09-19 00:01:32.748298 | orchestrator | 00:01:32.748 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 00:01:32.748355 | orchestrator | 00:01:32.748 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-09-19 00:01:32.748364 | orchestrator | 00:01:32.748 STDOUT terraform:  + region = (known after apply) 2025-09-19 00:01:32.748377 | orchestrator | 00:01:32.748 STDOUT terraform:  + size = 20 2025-09-19 00:01:32.748405 | orchestrator | 00:01:32.748 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 00:01:32.748463 | orchestrator | 00:01:32.748 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 00:01:32.748473 | orchestrator | 00:01:32.748 STDOUT terraform:  } 2025-09-19 00:01:32.748574 | orchestrator | 00:01:32.748 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-09-19 00:01:32.748608 | 
2025-09-19 00:01:32.748 | orchestrator | STDOUT terraform:
  + resource "openstack_compute_instance_v2" "manager_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-4V-16"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-manager"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = (sensitive value)

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[0] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] and [2] will be created
  # (same attributes as [0]; names "testbed-node-1" and "testbed-node-2")

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy =
false 2025-09-19 00:01:32.755905 | orchestrator | 00:01:32.755 STDOUT terraform:  + updated = (known after apply) 2025-09-19 00:01:32.756013 | orchestrator | 00:01:32.755 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-19 00:01:32.756101 | orchestrator | 00:01:32.756 STDOUT terraform:  + block_device { 2025-09-19 00:01:32.756146 | orchestrator | 00:01:32.756 STDOUT terraform:  + boot_index = 0 2025-09-19 00:01:32.756249 | orchestrator | 00:01:32.756 STDOUT terraform:  + delete_on_termination = false 2025-09-19 00:01:32.756335 | orchestrator | 00:01:32.756 STDOUT terraform:  + destination_type = "volume" 2025-09-19 00:01:32.756503 | orchestrator | 00:01:32.756 STDOUT terraform:  + multiattach = false 2025-09-19 00:01:32.756558 | orchestrator | 00:01:32.756 STDOUT terraform:  + source_type = "volume" 2025-09-19 00:01:32.756716 | orchestrator | 00:01:32.756 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 00:01:32.756853 | orchestrator | 00:01:32.756 STDOUT terraform:  } 2025-09-19 00:01:32.756907 | orchestrator | 00:01:32.756 STDOUT terraform:  + network { 2025-09-19 00:01:32.756996 | orchestrator | 00:01:32.756 STDOUT terraform:  + access_network = false 2025-09-19 00:01:32.757127 | orchestrator | 00:01:32.757 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-19 00:01:32.757200 | orchestrator | 00:01:32.757 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-19 00:01:32.757844 | orchestrator | 00:01:32.757 STDOUT terraform:  + mac = (known after apply) 2025-09-19 00:01:32.757943 | orchestrator | 00:01:32.757 STDOUT terraform:  + name = (known after apply) 2025-09-19 00:01:32.758082 | orchestrator | 00:01:32.757 STDOUT terraform:  + port = (known after apply) 2025-09-19 00:01:32.758161 | orchestrator | 00:01:32.758 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 00:01:32.758202 | orchestrator | 00:01:32.758 STDOUT terraform:  } 2025-09-19 00:01:32.758497 | orchestrator | 00:01:32.758 
STDOUT terraform:  } 2025-09-19 00:01:32.758664 | orchestrator | 00:01:32.758 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-09-19 00:01:32.758817 | orchestrator | 00:01:32.758 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-19 00:01:32.758946 | orchestrator | 00:01:32.758 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-19 00:01:32.759116 | orchestrator | 00:01:32.758 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-19 00:01:32.759171 | orchestrator | 00:01:32.759 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-19 00:01:32.759214 | orchestrator | 00:01:32.759 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 00:01:32.759248 | orchestrator | 00:01:32.759 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 00:01:32.759294 | orchestrator | 00:01:32.759 STDOUT terraform:  + config_drive = true 2025-09-19 00:01:32.759344 | orchestrator | 00:01:32.759 STDOUT terraform:  + created = (known after apply) 2025-09-19 00:01:32.759490 | orchestrator | 00:01:32.759 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-19 00:01:32.759605 | orchestrator | 00:01:32.759 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-19 00:01:32.759814 | orchestrator | 00:01:32.759 STDOUT terraform:  + force_delete = false 2025-09-19 00:01:32.759957 | orchestrator | 00:01:32.759 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-19 00:01:32.760113 | orchestrator | 00:01:32.759 STDOUT terraform:  + id = (known after apply) 2025-09-19 00:01:32.760286 | orchestrator | 00:01:32.760 STDOUT terraform:  + image_id = (known after apply) 2025-09-19 00:01:32.760422 | orchestrator | 00:01:32.760 STDOUT terraform:  + image_name = (known after apply) 2025-09-19 00:01:32.760623 | orchestrator | 00:01:32.760 STDOUT terraform:  + key_pair = "testbed" 2025-09-19 00:01:32.760812 | orchestrator | 00:01:32.760 STDOUT terraform:  + name = 
"testbed-node-4" 2025-09-19 00:01:32.760866 | orchestrator | 00:01:32.760 STDOUT terraform:  + power_state = "active" 2025-09-19 00:01:32.761011 | orchestrator | 00:01:32.760 STDOUT terraform:  + region = (known after apply) 2025-09-19 00:01:32.761094 | orchestrator | 00:01:32.761 STDOUT terraform:  + security_groups = (known after apply) 2025-09-19 00:01:32.761224 | orchestrator | 00:01:32.761 STDOUT terraform:  + stop_before_destroy = false 2025-09-19 00:01:32.761365 | orchestrator | 00:01:32.761 STDOUT terraform:  + updated = (known after apply) 2025-09-19 00:01:32.761550 | orchestrator | 00:01:32.761 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-19 00:01:32.761608 | orchestrator | 00:01:32.761 STDOUT terraform:  + block_device { 2025-09-19 00:01:32.761879 | orchestrator | 00:01:32.761 STDOUT terraform:  + boot_index = 0 2025-09-19 00:01:32.762156 | orchestrator | 00:01:32.761 STDOUT terraform:  + delete_on_termination = false 2025-09-19 00:01:32.762255 | orchestrator | 00:01:32.762 STDOUT terraform:  + destination_type = "volume" 2025-09-19 00:01:32.762360 | orchestrator | 00:01:32.762 STDOUT terraform:  + multiattach = false 2025-09-19 00:01:32.762498 | orchestrator | 00:01:32.762 STDOUT terraform:  + source_type = "volume" 2025-09-19 00:01:32.762616 | orchestrator | 00:01:32.762 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 00:01:32.762670 | orchestrator | 00:01:32.762 STDOUT terraform:  } 2025-09-19 00:01:32.762728 | orchestrator | 00:01:32.762 STDOUT terraform:  + network { 2025-09-19 00:01:32.762804 | orchestrator | 00:01:32.762 STDOUT terraform:  + access_network = false 2025-09-19 00:01:32.762843 | orchestrator | 00:01:32.762 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-19 00:01:32.762883 | orchestrator | 00:01:32.762 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-19 00:01:32.762979 | orchestrator | 00:01:32.762 STDOUT terraform:  + mac = (known after apply) 2025-09-19 
00:01:32.763093 | orchestrator | 00:01:32.763 STDOUT terraform:  + name = (known after apply) 2025-09-19 00:01:32.763183 | orchestrator | 00:01:32.763 STDOUT terraform:  + port = (known after apply) 2025-09-19 00:01:32.763337 | orchestrator | 00:01:32.763 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 00:01:32.763412 | orchestrator | 00:01:32.763 STDOUT terraform:  } 2025-09-19 00:01:32.763545 | orchestrator | 00:01:32.763 STDOUT terraform:  } 2025-09-19 00:01:32.763669 | orchestrator | 00:01:32.763 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-09-19 00:01:32.763809 | orchestrator | 00:01:32.763 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-19 00:01:32.763952 | orchestrator | 00:01:32.763 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-19 00:01:32.764086 | orchestrator | 00:01:32.763 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-19 00:01:32.764240 | orchestrator | 00:01:32.764 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-19 00:01:32.764375 | orchestrator | 00:01:32.764 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 00:01:32.764449 | orchestrator | 00:01:32.764 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 00:01:32.764499 | orchestrator | 00:01:32.764 STDOUT terraform:  + config_drive = true 2025-09-19 00:01:32.764575 | orchestrator | 00:01:32.764 STDOUT terraform:  + created = (known after apply) 2025-09-19 00:01:32.764680 | orchestrator | 00:01:32.764 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-19 00:01:32.764855 | orchestrator | 00:01:32.764 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-19 00:01:32.764954 | orchestrator | 00:01:32.764 STDOUT terraform:  + force_delete = false 2025-09-19 00:01:32.765062 | orchestrator | 00:01:32.764 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-19 00:01:32.765183 | orchestrator | 00:01:32.765 STDOUT 
terraform:  + id = (known after apply) 2025-09-19 00:01:32.765291 | orchestrator | 00:01:32.765 STDOUT terraform:  + image_id = (known after apply) 2025-09-19 00:01:32.765387 | orchestrator | 00:01:32.765 STDOUT terraform:  + image_name = (known after apply) 2025-09-19 00:01:32.765471 | orchestrator | 00:01:32.765 STDOUT terraform:  + key_pair = "testbed" 2025-09-19 00:01:32.765558 | orchestrator | 00:01:32.765 STDOUT terraform:  + name = "testbed-node-5" 2025-09-19 00:01:32.765657 | orchestrator | 00:01:32.765 STDOUT terraform:  + power_state = "active" 2025-09-19 00:01:32.765827 | orchestrator | 00:01:32.765 STDOUT terraform:  + region = (known after apply) 2025-09-19 00:01:32.765980 | orchestrator | 00:01:32.765 STDOUT terraform:  + security_groups = (known after apply) 2025-09-19 00:01:32.766056 | orchestrator | 00:01:32.766 STDOUT terraform:  + stop_before_destroy = false 2025-09-19 00:01:32.766151 | orchestrator | 00:01:32.766 STDOUT terraform:  + updated = (known after apply) 2025-09-19 00:01:32.766322 | orchestrator | 00:01:32.766 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-19 00:01:32.766407 | orchestrator | 00:01:32.766 STDOUT terraform:  + block_device { 2025-09-19 00:01:32.766480 | orchestrator | 00:01:32.766 STDOUT terraform:  + boot_index = 0 2025-09-19 00:01:32.766556 | orchestrator | 00:01:32.766 STDOUT terraform:  + delete_on_termination = false 2025-09-19 00:01:32.766659 | orchestrator | 00:01:32.766 STDOUT terraform:  + destination_type = "volume" 2025-09-19 00:01:32.766789 | orchestrator | 00:01:32.766 STDOUT terraform:  + multiattach = false 2025-09-19 00:01:32.766855 | orchestrator | 00:01:32.766 STDOUT terraform:  + source_type = "volume" 2025-09-19 00:01:32.766991 | orchestrator | 00:01:32.766 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 00:01:32.767068 | orchestrator | 00:01:32.767 STDOUT terraform:  } 2025-09-19 00:01:32.767130 | orchestrator | 00:01:32.767 STDOUT terraform:  + network 
{ 2025-09-19 00:01:32.767197 | orchestrator | 00:01:32.767 STDOUT terraform:  + access_network = false 2025-09-19 00:01:32.767238 | orchestrator | 00:01:32.767 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-19 00:01:32.767394 | orchestrator | 00:01:32.767 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-19 00:01:32.767461 | orchestrator | 00:01:32.767 STDOUT terraform:  + mac = (known after apply) 2025-09-19 00:01:32.767502 | orchestrator | 00:01:32.767 STDOUT terraform:  + name = (known after apply) 2025-09-19 00:01:32.767553 | orchestrator | 00:01:32.767 STDOUT terraform:  + port = (known after apply) 2025-09-19 00:01:32.767595 | orchestrator | 00:01:32.767 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 00:01:32.768558 | orchestrator | 00:01:32.767 STDOUT terraform:  } 2025-09-19 00:01:32.768613 | orchestrator | 00:01:32.768 STDOUT terraform:  } 2025-09-19 00:01:32.768661 | orchestrator | 00:01:32.768 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-09-19 00:01:32.768705 | orchestrator | 00:01:32.768 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-09-19 00:01:32.768743 | orchestrator | 00:01:32.768 STDOUT terraform:  + fingerprint = (known after apply) 2025-09-19 00:01:32.768802 | orchestrator | 00:01:32.768 STDOUT terraform:  + id = (known after apply) 2025-09-19 00:01:32.768834 | orchestrator | 00:01:32.768 STDOUT terraform:  + name = "testbed" 2025-09-19 00:01:32.768866 | orchestrator | 00:01:32.768 STDOUT terraform:  + private_key = (sensitive value) 2025-09-19 00:01:32.768900 | orchestrator | 00:01:32.768 STDOUT terraform:  + public_key = (known after apply) 2025-09-19 00:01:32.768939 | orchestrator | 00:01:32.768 STDOUT terraform:  + region = (known after apply) 2025-09-19 00:01:32.768979 | orchestrator | 00:01:32.768 STDOUT terraform:  + user_id = (known after apply) 2025-09-19 00:01:32.769000 | orchestrator | 00:01:32.768 STDOUT terraform:  } 2025-09-19 
00:01:32.769058 | orchestrator | 00:01:32.769 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-09-19 00:01:32.769142 | orchestrator | 00:01:32.769 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-19 00:01:32.769183 | orchestrator | 00:01:32.769 STDOUT terraform:  + device = (known after apply) 2025-09-19 00:01:32.769219 | orchestrator | 00:01:32.769 STDOUT terraform:  + id = (known after apply) 2025-09-19 00:01:32.769264 | orchestrator | 00:01:32.769 STDOUT terraform:  + instance_id = (known after apply) 2025-09-19 00:01:32.769299 | orchestrator | 00:01:32.769 STDOUT terraform:  + region = (known after apply) 2025-09-19 00:01:32.769335 | orchestrator | 00:01:32.769 STDOUT terraform:  + volume_id = (known after apply) 2025-09-19 00:01:32.769363 | orchestrator | 00:01:32.769 STDOUT terraform:  } 2025-09-19 00:01:32.769424 | orchestrator | 00:01:32.769 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-09-19 00:01:32.769478 | orchestrator | 00:01:32.769 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-19 00:01:32.769513 | orchestrator | 00:01:32.769 STDOUT terraform:  + device = (known after apply) 2025-09-19 00:01:32.769548 | orchestrator | 00:01:32.769 STDOUT terraform:  + id = (known after apply) 2025-09-19 00:01:32.769582 | orchestrator | 00:01:32.769 STDOUT terraform:  + instance_id = (known after apply) 2025-09-19 00:01:32.769617 | orchestrator | 00:01:32.769 STDOUT terraform:  + region = (known after apply) 2025-09-19 00:01:32.769650 | orchestrator | 00:01:32.769 STDOUT terraform:  + volume_id = (known after apply) 2025-09-19 00:01:32.769671 | orchestrator | 00:01:32.769 STDOUT terraform:  } 2025-09-19 00:01:32.769726 | orchestrator | 00:01:32.769 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 
2025-09-19 00:01:32.769815 | orchestrator | 00:01:32.769 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-09-19 00:01:32.769859 | orchestrator | 00:01:32.769 STDOUT terraform:  + device = (known after apply)
2025-09-19 00:01:32.769906 | orchestrator | 00:01:32.769 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.769961 | orchestrator | 00:01:32.769 STDOUT terraform:  + instance_id = (known after apply)
2025-09-19 00:01:32.769999 | orchestrator | 00:01:32.769 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.770058 | orchestrator | 00:01:32.770 STDOUT terraform:  + volume_id = (known after apply)
2025-09-19 00:01:32.770113 | orchestrator | 00:01:32.770 STDOUT terraform:  }
2025-09-19 00:01:32.770170 | orchestrator | 00:01:32.770 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
2025-09-19 00:01:32.770243 | orchestrator | 00:01:32.770 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-09-19 00:01:32.770295 | orchestrator | 00:01:32.770 STDOUT terraform:  + device = (known after apply)
2025-09-19 00:01:32.770347 | orchestrator | 00:01:32.770 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.770401 | orchestrator | 00:01:32.770 STDOUT terraform:  + instance_id = (known after apply)
2025-09-19 00:01:32.770459 | orchestrator | 00:01:32.770 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.770512 | orchestrator | 00:01:32.770 STDOUT terraform:  + volume_id = (known after apply)
2025-09-19 00:01:32.770536 | orchestrator | 00:01:32.770 STDOUT terraform:  }
2025-09-19 00:01:32.770610 | orchestrator | 00:01:32.770 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
2025-09-19 00:01:32.770692 | orchestrator | 00:01:32.770 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-09-19 00:01:32.770745 | orchestrator | 00:01:32.770 STDOUT terraform:  + device = (known after apply)
2025-09-19 00:01:32.770834 | orchestrator | 00:01:32.770 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.770909 | orchestrator | 00:01:32.770 STDOUT terraform:  + instance_id = (known after apply)
2025-09-19 00:01:32.770987 | orchestrator | 00:01:32.770 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.771027 | orchestrator | 00:01:32.770 STDOUT terraform:  + volume_id = (known after apply)
2025-09-19 00:01:32.771067 | orchestrator | 00:01:32.771 STDOUT terraform:  }
2025-09-19 00:01:32.771171 | orchestrator | 00:01:32.771 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
2025-09-19 00:01:32.771245 | orchestrator | 00:01:32.771 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-09-19 00:01:32.771297 | orchestrator | 00:01:32.771 STDOUT terraform:  + device = (known after apply)
2025-09-19 00:01:32.771333 | orchestrator | 00:01:32.771 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.771382 | orchestrator | 00:01:32.771 STDOUT terraform:  + instance_id = (known after apply)
2025-09-19 00:01:32.771421 | orchestrator | 00:01:32.771 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.771471 | orchestrator | 00:01:32.771 STDOUT terraform:  + volume_id = (known after apply)
2025-09-19 00:01:32.771492 | orchestrator | 00:01:32.771 STDOUT terraform:  }
2025-09-19 00:01:32.771562 | orchestrator | 00:01:32.771 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
2025-09-19 00:01:32.771634 | orchestrator | 00:01:32.771 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-09-19 00:01:32.771684 | orchestrator | 00:01:32.771 STDOUT terraform:  + device = (known after apply)
2025-09-19 00:01:32.771720 | orchestrator | 00:01:32.771 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.771785 | orchestrator | 00:01:32.771 STDOUT terraform:  + instance_id = (known after apply)
2025-09-19 00:01:32.771822 | orchestrator | 00:01:32.771 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.771873 | orchestrator | 00:01:32.771 STDOUT terraform:  + volume_id = (known after apply)
2025-09-19 00:01:32.771894 | orchestrator | 00:01:32.771 STDOUT terraform:  }
2025-09-19 00:01:32.771965 | orchestrator | 00:01:32.771 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
2025-09-19 00:01:32.772036 | orchestrator | 00:01:32.771 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-09-19 00:01:32.772085 | orchestrator | 00:01:32.772 STDOUT terraform:  + device = (known after apply)
2025-09-19 00:01:32.772122 | orchestrator | 00:01:32.772 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.772172 | orchestrator | 00:01:32.772 STDOUT terraform:  + instance_id = (known after apply)
2025-09-19 00:01:32.772207 | orchestrator | 00:01:32.772 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.772264 | orchestrator | 00:01:32.772 STDOUT terraform:  + volume_id = (known after apply)
2025-09-19 00:01:32.772285 | orchestrator | 00:01:32.772 STDOUT terraform:  }
2025-09-19 00:01:32.772357 | orchestrator | 00:01:32.772 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
2025-09-19 00:01:32.772427 | orchestrator | 00:01:32.772 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-09-19 00:01:32.772496 | orchestrator | 00:01:32.772 STDOUT terraform:  + device = (known after apply)
2025-09-19 00:01:32.772536 | orchestrator | 00:01:32.772 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.772588 | orchestrator | 00:01:32.772 STDOUT terraform:  + instance_id = (known after apply)
2025-09-19 00:01:32.772638 | orchestrator | 00:01:32.772 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.772676 | orchestrator | 00:01:32.772 STDOUT terraform:  + volume_id = (known after apply)
2025-09-19 00:01:32.772710 | orchestrator | 00:01:32.772 STDOUT terraform:  }
2025-09-19 00:01:32.772808 | orchestrator | 00:01:32.772 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
2025-09-19 00:01:32.772890 | orchestrator | 00:01:32.772 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
2025-09-19 00:01:32.772927 | orchestrator | 00:01:32.772 STDOUT terraform:  + fixed_ip = (known after apply)
2025-09-19 00:01:32.772978 | orchestrator | 00:01:32.772 STDOUT terraform:  + floating_ip = (known after apply)
2025-09-19 00:01:32.773029 | orchestrator | 00:01:32.772 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.773065 | orchestrator | 00:01:32.773 STDOUT terraform:  + port_id = (known after apply)
2025-09-19 00:01:32.773115 | orchestrator | 00:01:32.773 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.773138 | orchestrator | 00:01:32.773 STDOUT terraform:  }
2025-09-19 00:01:32.773206 | orchestrator | 00:01:32.773 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created
2025-09-19 00:01:32.773278 | orchestrator | 00:01:32.773 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
2025-09-19 00:01:32.773311 | orchestrator | 00:01:32.773 STDOUT terraform:  + address = (known after apply)
2025-09-19 00:01:32.773359 | orchestrator | 00:01:32.773 STDOUT terraform:  + all_tags = (known after apply)
2025-09-19 00:01:32.773391 | orchestrator | 00:01:32.773 STDOUT terraform:  + dns_domain = (known after apply)
2025-09-19 00:01:32.773440 | orchestrator | 00:01:32.773 STDOUT terraform:  + dns_name = (known after apply)
2025-09-19 00:01:32.773490 | orchestrator | 00:01:32.773 STDOUT terraform:  + fixed_ip = (known after apply)
2025-09-19 00:01:32.773539 | orchestrator | 00:01:32.773 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.773598 | orchestrator | 00:01:32.773 STDOUT terraform:  + pool = "public"
2025-09-19 00:01:32.773670 | orchestrator | 00:01:32.773 STDOUT terraform:  + port_id = (known after apply)
2025-09-19 00:01:32.773709 | orchestrator | 00:01:32.773 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.773781 | orchestrator | 00:01:32.773 STDOUT terraform:  + subnet_id = (known after apply)
2025-09-19 00:01:32.773842 | orchestrator | 00:01:32.773 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 00:01:32.773864 | orchestrator | 00:01:32.773 STDOUT terraform:  }
2025-09-19 00:01:32.773932 | orchestrator | 00:01:32.773 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created
2025-09-19 00:01:32.773997 | orchestrator | 00:01:32.773 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" {
2025-09-19 00:01:32.774195 | orchestrator | 00:01:32.774 STDOUT terraform:  + admin_state_up = (known after apply)
2025-09-19 00:01:32.774452 | orchestrator | 00:01:32.774 STDOUT terraform:  + all_tags = (known after apply)
2025-09-19 00:01:32.774578 | orchestrator | 00:01:32.774 STDOUT terraform:  + availability_zone_hints = [
2025-09-19 00:01:32.774680 | orchestrator | 00:01:32.774 STDOUT terraform:  + "nova",
2025-09-19 00:01:32.774817 | orchestrator | 00:01:32.774 STDOUT terraform:  ]
2025-09-19 00:01:32.774979 | orchestrator | 00:01:32.774 STDOUT terraform:  + dns_domain = (known after apply)
2025-09-19 00:01:32.775031 | orchestrator | 00:01:32.774 STDOUT terraform:  + external = (known after apply)
2025-09-19 00:01:32.775079 | orchestrator | 00:01:32.775 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.775125 | orchestrator | 00:01:32.775 STDOUT terraform:  + mtu = (known after apply)
2025-09-19 00:01:32.775172 | orchestrator | 00:01:32.775 STDOUT terraform:  + name = "net-testbed-management"
2025-09-19 00:01:32.775218 | orchestrator | 00:01:32.775 STDOUT terraform:  + port_security_enabled = (known after apply)
2025-09-19 00:01:32.775262 | orchestrator | 00:01:32.775 STDOUT terraform:  + qos_policy_id = (known after apply)
2025-09-19 00:01:32.775306 | orchestrator | 00:01:32.775 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.775373 | orchestrator | 00:01:32.775 STDOUT terraform:  + shared = (known after apply)
2025-09-19 00:01:32.775429 | orchestrator | 00:01:32.775 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 00:01:32.775495 | orchestrator | 00:01:32.775 STDOUT terraform:  + transparent_vlan = (known after apply)
2025-09-19 00:01:32.775529 | orchestrator | 00:01:32.775 STDOUT terraform:  + segments (known after apply)
2025-09-19 00:01:32.775566 | orchestrator | 00:01:32.775 STDOUT terraform:  }
2025-09-19 00:01:32.775646 | orchestrator | 00:01:32.775 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created
2025-09-19 00:01:32.775778 | orchestrator | 00:01:32.775 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" {
2025-09-19 00:01:32.775874 | orchestrator | 00:01:32.775 STDOUT terraform:  + admin_state_up = (known after apply)
2025-09-19 00:01:32.775978 | orchestrator | 00:01:32.775 STDOUT terraform:  + all_fixed_ips = (known after apply)
2025-09-19 00:01:32.776066 | orchestrator | 00:01:32.775 STDOUT terraform:  + all_security_group_ids = (known after apply)
2025-09-19 00:01:32.776137 | orchestrator | 00:01:32.776 STDOUT terraform:  + all_tags = (known after apply)
2025-09-19 00:01:32.776236 | orchestrator | 00:01:32.776 STDOUT terraform:  + device_id = (known after apply)
2025-09-19 00:01:32.776327 | orchestrator | 00:01:32.776 STDOUT terraform:  + device_owner = (known after apply)
2025-09-19 00:01:32.776511 | orchestrator | 00:01:32.776 STDOUT terraform:  + dns_assignment = (known after apply)
2025-09-19 00:01:32.776612 | orchestrator | 00:01:32.776 STDOUT terraform:  + dns_name = (known after apply)
2025-09-19 00:01:32.776676 | orchestrator | 00:01:32.776 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.776814 | orchestrator | 00:01:32.776 STDOUT terraform:  + mac_address = (known after apply)
2025-09-19 00:01:32.776866 | orchestrator | 00:01:32.776 STDOUT terraform:  + network_id = (known after apply)
2025-09-19 00:01:32.776949 | orchestrator | 00:01:32.776 STDOUT terraform:  + port_security_enabled = (known after apply)
2025-09-19 00:01:32.777065 | orchestrator | 00:01:32.776 STDOUT terraform:  + qos_policy_id = (known after apply)
2025-09-19 00:01:32.777233 | orchestrator | 00:01:32.777 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.777386 | orchestrator | 00:01:32.777 STDOUT terraform:  + security_group_ids = (known after apply)
2025-09-19 00:01:32.777565 | orchestrator | 00:01:32.777 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 00:01:32.777597 | orchestrator | 00:01:32.777 STDOUT terraform:  + allowed_address_pairs {
2025-09-19 00:01:32.777791 | orchestrator | 00:01:32.777 STDOUT terraform:  + ip_address = "192.168.112.0/20"
2025-09-19 00:01:32.777894 | orchestrator | 00:01:32.777 STDOUT terraform:  }
2025-09-19 00:01:32.778027 | orchestrator | 00:01:32.777 STDOUT terraform:  + allowed_address_pairs {
2025-09-19 00:01:32.778108 | orchestrator | 00:01:32.778 STDOUT terraform:  + ip_address = "192.168.16.8/20"
2025-09-19 00:01:32.778272 | orchestrator | 00:01:32.778 STDOUT terraform:  }
2025-09-19 00:01:32.778386 | orchestrator | 00:01:32.778 STDOUT terraform:  + binding (known after apply)
2025-09-19 00:01:32.778421 | orchestrator | 00:01:32.778 STDOUT terraform:  + fixed_ip {
2025-09-19 00:01:32.778545 | orchestrator | 00:01:32.778 STDOUT terraform:  + ip_address = "192.168.16.5"
2025-09-19 00:01:32.778601 | orchestrator | 00:01:32.778 STDOUT terraform:  + subnet_id = (known after apply)
2025-09-19 00:01:32.778634 | orchestrator | 00:01:32.778 STDOUT terraform:  }
2025-09-19 00:01:32.778715 | orchestrator | 00:01:32.778 STDOUT terraform:  }
2025-09-19 00:01:32.778871 | orchestrator | 00:01:32.778 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created
2025-09-19 00:01:32.779022 | orchestrator | 00:01:32.778 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" {
2025-09-19 00:01:32.779299 | orchestrator | 00:01:32.779 STDOUT terraform:  + admin_state_up = (known after apply)
2025-09-19 00:01:32.779482 | orchestrator | 00:01:32.779 STDOUT terraform:  + all_fixed_ips = (known after apply)
2025-09-19 00:01:32.779648 | orchestrator | 00:01:32.779 STDOUT terraform:  + all_security_group_ids = (known after apply)
2025-09-19 00:01:32.779873 | orchestrator | 00:01:32.779 STDOUT terraform:  + all_tags = (known after apply)
2025-09-19 00:01:32.780036 | orchestrator | 00:01:32.779 STDOUT terraform:  + device_id = (known after apply)
2025-09-19 00:01:32.780119 | orchestrator | 00:01:32.780 STDOUT terraform:  + device_owner = (known after apply)
2025-09-19 00:01:32.780220 | orchestrator | 00:01:32.780 STDOUT terraform:  + dns_assignment = (known after apply)
2025-09-19 00:01:32.780346 | orchestrator | 00:01:32.780 STDOUT terraform:  + dns_name = (known after apply)
2025-09-19 00:01:32.780483 | orchestrator | 00:01:32.780 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.780739 | orchestrator | 00:01:32.780 STDOUT terraform:  + mac_address = (known after apply)
2025-09-19 00:01:32.781019 | orchestrator | 00:01:32.780 STDOUT terraform:  + network_id = (known after apply)
2025-09-19 00:01:32.781210 | orchestrator | 00:01:32.781 STDOUT terraform:  + port_security_enabled = (known after apply)
2025-09-19 00:01:32.781465 | orchestrator | 00:01:32.781 STDOUT terraform:  + qos_policy_id = (known after apply)
2025-09-19 00:01:32.781532 | orchestrator | 00:01:32.781 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.781580 | orchestrator | 00:01:32.781 STDOUT terraform:  + security_group_ids = (known after apply)
2025-09-19 00:01:32.781624 | orchestrator | 00:01:32.781 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 00:01:32.781654 | orchestrator | 00:01:32.781 STDOUT terraform:  + allowed_address_pairs {
2025-09-19 00:01:32.781690 | orchestrator | 00:01:32.781 STDOUT terraform:  + ip_address = "192.168.112.0/20"
2025-09-19 00:01:32.781713 | orchestrator | 00:01:32.781 STDOUT terraform:  }
2025-09-19 00:01:32.781742 | orchestrator | 00:01:32.781 STDOUT terraform:  + allowed_address_pairs {
2025-09-19 00:01:32.781796 | orchestrator | 00:01:32.781 STDOUT terraform:  + ip_address = "192.168.16.254/20"
2025-09-19 00:01:32.781819 | orchestrator | 00:01:32.781 STDOUT terraform:  }
2025-09-19 00:01:32.781848 | orchestrator | 00:01:32.781 STDOUT terraform:  + allowed_address_pairs {
2025-09-19 00:01:32.781883 | orchestrator | 00:01:32.781 STDOUT terraform:  + ip_address = "192.168.16.8/20"
2025-09-19 00:01:32.781906 | orchestrator | 00:01:32.781 STDOUT terraform:  }
2025-09-19 00:01:32.781935 | orchestrator | 00:01:32.781 STDOUT terraform:  + allowed_address_pairs {
2025-09-19 00:01:32.781970 | orchestrator | 00:01:32.781 STDOUT terraform:  + ip_address = "192.168.16.9/20"
2025-09-19 00:01:32.781990 | orchestrator | 00:01:32.781 STDOUT terraform:  }
2025-09-19 00:01:32.782035 | orchestrator | 00:01:32.781 STDOUT terraform:  + binding (known after apply)
2025-09-19 00:01:32.782104 | orchestrator | 00:01:32.782 STDOUT terraform:  + fixed_ip {
2025-09-19 00:01:32.782228 | orchestrator | 00:01:32.782 STDOUT terraform:  + ip_address = "192.168.16.10"
2025-09-19 00:01:32.782344 | orchestrator | 00:01:32.782 STDOUT terraform:  + subnet_id = (known after apply)
2025-09-19 00:01:32.782369 | orchestrator | 00:01:32.782 STDOUT terraform:  }
2025-09-19 00:01:32.782391 | orchestrator | 00:01:32.782 STDOUT terraform:  }
2025-09-19 00:01:32.782469 | orchestrator | 00:01:32.782 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created
2025-09-19 00:01:32.782527 | orchestrator | 00:01:32.782 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" {
2025-09-19 00:01:32.782589 | orchestrator | 00:01:32.782 STDOUT terraform:  + admin_state_up = (known after apply)
2025-09-19 00:01:32.782657 | orchestrator | 00:01:32.782 STDOUT terraform:  + all_fixed_ips = (known after apply)
2025-09-19 00:01:32.782701 | orchestrator | 00:01:32.782 STDOUT terraform:  + all_security_group_ids = (known after apply)
2025-09-19 00:01:32.782797 | orchestrator | 00:01:32.782 STDOUT terraform:  + all_tags = (known after apply)
2025-09-19 00:01:32.782853 | orchestrator | 00:01:32.782 STDOUT terraform:  + device_id = (known after apply)
2025-09-19 00:01:32.782909 | orchestrator | 00:01:32.782 STDOUT terraform:  + device_owner = (known after apply)
2025-09-19 00:01:32.782960 | orchestrator | 00:01:32.782 STDOUT terraform:  + dns_assignment = (known after apply)
2025-09-19 00:01:32.783009 | orchestrator | 00:01:32.782 STDOUT terraform:  + dns_name = (known after apply)
2025-09-19 00:01:32.783055 | orchestrator | 00:01:32.783 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.783130 | orchestrator | 00:01:32.783 STDOUT terraform:  + mac_address = (known after apply)
2025-09-19 00:01:32.783176 | orchestrator | 00:01:32.783 STDOUT terraform:  + network_id = (known after apply)
2025-09-19 00:01:32.783222 | orchestrator | 00:01:32.783 STDOUT terraform:  + port_security_enabled = (known after apply)
2025-09-19 00:01:32.783277 | orchestrator | 00:01:32.783 STDOUT terraform:  + qos_policy_id = (known after apply)
2025-09-19 00:01:32.783325 | orchestrator | 00:01:32.783 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.783398 | orchestrator | 00:01:32.783 STDOUT terraform:  + security_group_ids = (known after apply)
2025-09-19 00:01:32.783453 | orchestrator | 00:01:32.783 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 00:01:32.783493 | orchestrator | 00:01:32.783 STDOUT terraform:  + allowed_address_pairs {
2025-09-19 00:01:32.783539 | orchestrator | 00:01:32.783 STDOUT terraform:  + ip_address = "192.168.112.0/20"
2025-09-19 00:01:32.783571 | orchestrator | 00:01:32.783 STDOUT terraform:  }
2025-09-19 00:01:32.783610 | orchestrator | 00:01:32.783 STDOUT terraform:  + allowed_address_pairs {
2025-09-19 00:01:32.783659 | orchestrator | 00:01:32.783 STDOUT terraform:  + ip_address = "192.168.16.254/20"
2025-09-19 00:01:32.783714 | orchestrator | 00:01:32.783 STDOUT terraform:  }
2025-09-19 00:01:32.783794 | orchestrator | 00:01:32.783 STDOUT terraform:  + allowed_address_pairs {
2025-09-19 00:01:32.783943 | orchestrator | 00:01:32.783 STDOUT terraform:  + ip_address = "192.168.16.8/20"
2025-09-19 00:01:32.783972 | orchestrator | 00:01:32.783 STDOUT terraform:  }
2025-09-19 00:01:32.784093 | orchestrator | 00:01:32.783 STDOUT terraform:  + allowed_address_pairs {
2025-09-19 00:01:32.784234 | orchestrator | 00:01:32.784 STDOUT terraform:  + ip_address = "192.168.16.9/20"
2025-09-19 00:01:32.784269 | orchestrator | 00:01:32.784 STDOUT terraform:  }
2025-09-19 00:01:32.784308 | orchestrator | 00:01:32.784 STDOUT terraform:  + binding (known after apply)
2025-09-19 00:01:32.784348 | orchestrator | 00:01:32.784 STDOUT terraform:  + fixed_ip {
2025-09-19 00:01:32.784428 | orchestrator | 00:01:32.784 STDOUT terraform:  + ip_address = "192.168.16.11"
2025-09-19 00:01:32.784475 | orchestrator | 00:01:32.784 STDOUT terraform:  + subnet_id = (known after apply)
2025-09-19 00:01:32.784510 | orchestrator | 00:01:32.784 STDOUT terraform:  }
2025-09-19 00:01:32.784583 | orchestrator | 00:01:32.784 STDOUT terraform:  }
2025-09-19 00:01:32.784701 | orchestrator | 00:01:32.784 STDOUT
terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-09-19 00:01:32.784782 | orchestrator | 00:01:32.784 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-19 00:01:32.784828 | orchestrator | 00:01:32.784 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-19 00:01:32.784872 | orchestrator | 00:01:32.784 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-19 00:01:32.785052 | orchestrator | 00:01:32.784 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-19 00:01:32.785164 | orchestrator | 00:01:32.785 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 00:01:32.785250 | orchestrator | 00:01:32.785 STDOUT terraform:  + device_id = (known after apply) 2025-09-19 00:01:32.785576 | orchestrator | 00:01:32.785 STDOUT terraform:  + device_owner = (known after apply) 2025-09-19 00:01:32.785801 | orchestrator | 00:01:32.785 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-19 00:01:32.785959 | orchestrator | 00:01:32.785 STDOUT terraform:  + dns_name = (known after apply) 2025-09-19 00:01:32.786088 | orchestrator | 00:01:32.785 STDOUT terraform:  + id = (known after apply) 2025-09-19 00:01:32.786153 | orchestrator | 00:01:32.786 STDOUT terraform:  + mac_address = (known after apply) 2025-09-19 00:01:32.786254 | orchestrator | 00:01:32.786 STDOUT terraform:  + network_id = (known after apply) 2025-09-19 00:01:32.786386 | orchestrator | 00:01:32.786 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-19 00:01:32.786459 | orchestrator | 00:01:32.786 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-19 00:01:32.786603 | orchestrator | 00:01:32.786 STDOUT terraform:  + region = (known after apply) 2025-09-19 00:01:32.786779 | orchestrator | 00:01:32.786 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-19 00:01:32.786867 | orchestrator | 00:01:32.786 STDOUT terraform:  + 
tenant_id = (known after apply) 2025-09-19 00:01:32.786899 | orchestrator | 00:01:32.786 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 00:01:32.786936 | orchestrator | 00:01:32.786 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-19 00:01:32.786958 | orchestrator | 00:01:32.786 STDOUT terraform:  } 2025-09-19 00:01:32.787025 | orchestrator | 00:01:32.786 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 00:01:32.787143 | orchestrator | 00:01:32.787 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-19 00:01:32.787292 | orchestrator | 00:01:32.787 STDOUT terraform:  } 2025-09-19 00:01:32.787322 | orchestrator | 00:01:32.787 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 00:01:32.787417 | orchestrator | 00:01:32.787 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-19 00:01:32.787460 | orchestrator | 00:01:32.787 STDOUT terraform:  } 2025-09-19 00:01:32.787516 | orchestrator | 00:01:32.787 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 00:01:32.787628 | orchestrator | 00:01:32.787 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-19 00:01:32.787678 | orchestrator | 00:01:32.787 STDOUT terraform:  } 2025-09-19 00:01:32.788014 | orchestrator | 00:01:32.787 STDOUT terraform:  + binding (known after apply) 2025-09-19 00:01:32.788096 | orchestrator | 00:01:32.788 STDOUT terraform:  + fixed_ip { 2025-09-19 00:01:32.788166 | orchestrator | 00:01:32.788 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-09-19 00:01:32.788274 | orchestrator | 00:01:32.788 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-19 00:01:32.788333 | orchestrator | 00:01:32.788 STDOUT terraform:  } 2025-09-19 00:01:32.788376 | orchestrator | 00:01:32.788 STDOUT terraform:  } 2025-09-19 00:01:32.788466 | orchestrator | 00:01:32.788 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-09-19 00:01:32.788532 | orchestrator | 00:01:32.788 STDOUT terraform:  + resource 
"openstack_networking_port_v2" "node_port_management" { 2025-09-19 00:01:32.788592 | orchestrator | 00:01:32.788 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-19 00:01:32.788792 | orchestrator | 00:01:32.788 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-19 00:01:32.788887 | orchestrator | 00:01:32.788 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-19 00:01:32.788963 | orchestrator | 00:01:32.788 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 00:01:32.789050 | orchestrator | 00:01:32.788 STDOUT terraform:  + device_id = (known after apply) 2025-09-19 00:01:32.789133 | orchestrator | 00:01:32.789 STDOUT terraform:  + device_owner = (known after apply) 2025-09-19 00:01:32.789387 | orchestrator | 00:01:32.789 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-19 00:01:32.789624 | orchestrator | 00:01:32.789 STDOUT terraform:  + dns_name = (known after apply) 2025-09-19 00:01:32.789885 | orchestrator | 00:01:32.789 STDOUT terraform:  + id = (known after apply) 2025-09-19 00:01:32.790105 | orchestrator | 00:01:32.789 STDOUT terraform:  + mac_address = (known after apply) 2025-09-19 00:01:32.790193 | orchestrator | 00:01:32.790 STDOUT terraform:  + network_id = (known after apply) 2025-09-19 00:01:32.790276 | orchestrator | 00:01:32.790 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-19 00:01:32.790424 | orchestrator | 00:01:32.790 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-19 00:01:32.790504 | orchestrator | 00:01:32.790 STDOUT terraform:  + region = (known after apply) 2025-09-19 00:01:32.790633 | orchestrator | 00:01:32.790 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-19 00:01:32.790801 | orchestrator | 00:01:32.790 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 00:01:32.790870 | orchestrator | 00:01:32.790 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 00:01:32.790941 | 
orchestrator | 00:01:32.790 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-19 00:01:32.791007 | orchestrator | 00:01:32.790 STDOUT terraform:  } 2025-09-19 00:01:32.791149 | orchestrator | 00:01:32.791 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 00:01:32.791202 | orchestrator | 00:01:32.791 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-19 00:01:32.791232 | orchestrator | 00:01:32.791 STDOUT terraform:  } 2025-09-19 00:01:32.791276 | orchestrator | 00:01:32.791 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 00:01:32.791321 | orchestrator | 00:01:32.791 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-19 00:01:32.791418 | orchestrator | 00:01:32.791 STDOUT terraform:  } 2025-09-19 00:01:32.791456 | orchestrator | 00:01:32.791 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 00:01:32.791501 | orchestrator | 00:01:32.791 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-19 00:01:32.791537 | orchestrator | 00:01:32.791 STDOUT terraform:  } 2025-09-19 00:01:32.791577 | orchestrator | 00:01:32.791 STDOUT terraform:  + binding (known after apply) 2025-09-19 00:01:32.791621 | orchestrator | 00:01:32.791 STDOUT terraform:  + fixed_ip { 2025-09-19 00:01:32.791797 | orchestrator | 00:01:32.791 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-09-19 00:01:32.791840 | orchestrator | 00:01:32.791 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-19 00:01:32.791862 | orchestrator | 00:01:32.791 STDOUT terraform:  } 2025-09-19 00:01:32.791883 | orchestrator | 00:01:32.791 STDOUT terraform:  } 2025-09-19 00:01:32.791991 | orchestrator | 00:01:32.791 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-09-19 00:01:32.792135 | orchestrator | 00:01:32.792 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-19 00:01:32.792218 | orchestrator | 00:01:32.792 STDOUT terraform:  + admin_state_up = (known after 
apply) 2025-09-19 00:01:32.792327 | orchestrator | 00:01:32.792 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-19 00:01:32.792513 | orchestrator | 00:01:32.792 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-19 00:01:32.792612 | orchestrator | 00:01:32.792 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 00:01:32.792700 | orchestrator | 00:01:32.792 STDOUT terraform:  + device_id = (known after apply) 2025-09-19 00:01:32.792769 | orchestrator | 00:01:32.792 STDOUT terraform:  + device_owner = (known after apply) 2025-09-19 00:01:32.792897 | orchestrator | 00:01:32.792 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-19 00:01:32.793019 | orchestrator | 00:01:32.792 STDOUT terraform:  + dns_name = (known after apply) 2025-09-19 00:01:32.793127 | orchestrator | 00:01:32.793 STDOUT terraform:  + id = (known after apply) 2025-09-19 00:01:32.793248 | orchestrator | 00:01:32.793 STDOUT terraform:  + mac_address = (known after apply) 2025-09-19 00:01:32.793338 | orchestrator | 00:01:32.793 STDOUT terraform:  + network_id = (known after apply) 2025-09-19 00:01:32.793588 | orchestrator | 00:01:32.793 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-19 00:01:32.793814 | orchestrator | 00:01:32.793 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-19 00:01:32.794073 | orchestrator | 00:01:32.793 STDOUT terraform:  + region = (known after apply) 2025-09-19 00:01:32.794235 | orchestrator | 00:01:32.794 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-19 00:01:32.794324 | orchestrator | 00:01:32.794 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 00:01:32.794443 | orchestrator | 00:01:32.794 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 00:01:32.794597 | orchestrator | 00:01:32.794 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-19 00:01:32.794673 | orchestrator | 00:01:32.794 STDOUT terraform:  } 2025-09-19 
00:01:32.794812 | orchestrator | 00:01:32.794 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 00:01:32.794917 | orchestrator | 00:01:32.794 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-19 00:01:32.794945 | orchestrator | 00:01:32.794 STDOUT terraform:  } 2025-09-19 00:01:32.794978 | orchestrator | 00:01:32.794 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 00:01:32.795033 | orchestrator | 00:01:32.794 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-19 00:01:32.795068 | orchestrator | 00:01:32.795 STDOUT terraform:  } 2025-09-19 00:01:32.795106 | orchestrator | 00:01:32.795 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 00:01:32.795169 | orchestrator | 00:01:32.795 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-19 00:01:32.795215 | orchestrator | 00:01:32.795 STDOUT terraform:  } 2025-09-19 00:01:32.795284 | orchestrator | 00:01:32.795 STDOUT terraform:  + binding (known after apply) 2025-09-19 00:01:32.795325 | orchestrator | 00:01:32.795 STDOUT terraform:  + fixed_ip { 2025-09-19 00:01:32.795380 | orchestrator | 00:01:32.795 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-09-19 00:01:32.795423 | orchestrator | 00:01:32.795 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-19 00:01:32.795448 | orchestrator | 00:01:32.795 STDOUT terraform:  } 2025-09-19 00:01:32.795470 | orchestrator | 00:01:32.795 STDOUT terraform:  } 2025-09-19 00:01:32.795535 | orchestrator | 00:01:32.795 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-09-19 00:01:32.795602 | orchestrator | 00:01:32.795 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-19 00:01:32.795648 | orchestrator | 00:01:32.795 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-19 00:01:32.795705 | orchestrator | 00:01:32.795 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-19 00:01:32.795778 | orchestrator | 
00:01:32.795 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-19 00:01:32.795876 | orchestrator | 00:01:32.795 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 00:01:32.795923 | orchestrator | 00:01:32.795 STDOUT terraform:  + device_id = (known after apply) 2025-09-19 00:01:32.795984 | orchestrator | 00:01:32.795 STDOUT terraform:  + device_owner = (known after apply) 2025-09-19 00:01:32.796090 | orchestrator | 00:01:32.796 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-19 00:01:32.796388 | orchestrator | 00:01:32.796 STDOUT terraform:  + dns_name = (known after apply) 2025-09-19 00:01:32.796600 | orchestrator | 00:01:32.796 STDOUT terraform:  + id = (known after apply) 2025-09-19 00:01:32.796724 | orchestrator | 00:01:32.796 STDOUT terraform:  + mac_address = (known after apply) 2025-09-19 00:01:32.796934 | orchestrator | 00:01:32.796 STDOUT terraform:  + network_id = (known after apply) 2025-09-19 00:01:32.797100 | orchestrator | 00:01:32.796 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-19 00:01:32.797241 | orchestrator | 00:01:32.797 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-19 00:01:32.797495 | orchestrator | 00:01:32.797 STDOUT terraform:  + region = (known after apply) 2025-09-19 00:01:32.797737 | orchestrator | 00:01:32.797 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-19 00:01:32.797939 | orchestrator | 00:01:32.797 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 00:01:32.798081 | orchestrator | 00:01:32.797 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 00:01:32.798411 | orchestrator | 00:01:32.798 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-19 00:01:32.798490 | orchestrator | 00:01:32.798 STDOUT terraform:  } 2025-09-19 00:01:32.798608 | orchestrator | 00:01:32.798 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 00:01:32.798785 | orchestrator | 00:01:32.798 STDOUT terraform: 
 + ip_address = "192.168.16.254/20" 2025-09-19 00:01:32.798953 | orchestrator | 00:01:32.798 STDOUT terraform:  } 2025-09-19 00:01:32.798985 | orchestrator | 00:01:32.798 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 00:01:32.799167 | orchestrator | 00:01:32.799 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-19 00:01:32.799224 | orchestrator | 00:01:32.799 STDOUT terraform:  } 2025-09-19 00:01:32.799420 | orchestrator | 00:01:32.799 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 00:01:32.799508 | orchestrator | 00:01:32.799 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-19 00:01:32.799582 | orchestrator | 00:01:32.799 STDOUT terraform:  } 2025-09-19 00:01:32.799686 | orchestrator | 00:01:32.799 STDOUT terraform:  + binding (known after apply) 2025-09-19 00:01:32.800360 | orchestrator | 00:01:32.800 STDOUT terraform:  + fixed_ip { 2025-09-19 00:01:32.800410 | orchestrator | 00:01:32.800 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-09-19 00:01:32.800510 | orchestrator | 00:01:32.800 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-19 00:01:32.800542 | orchestrator | 00:01:32.800 STDOUT terraform:  } 2025-09-19 00:01:32.800566 | orchestrator | 00:01:32.800 STDOUT terraform:  } 2025-09-19 00:01:32.800637 | orchestrator | 00:01:32.800 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-09-19 00:01:32.800700 | orchestrator | 00:01:32.800 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-09-19 00:01:32.800732 | orchestrator | 00:01:32.800 STDOUT terraform:  + force_destroy = false 2025-09-19 00:01:32.800795 | orchestrator | 00:01:32.800 STDOUT terraform:  + id = (known after apply) 2025-09-19 00:01:32.800849 | orchestrator | 00:01:32.800 STDOUT terraform:  + port_id = (known after apply) 2025-09-19 00:01:32.800892 | orchestrator | 00:01:32.800 STDOUT terraform:  + region = (known after apply) 2025-09-19 
00:01:32.800931 | orchestrator | 00:01:32.800 STDOUT terraform:  + router_id = (known after apply) 2025-09-19 00:01:32.800970 | orchestrator | 00:01:32.800 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-19 00:01:32.800997 | orchestrator | 00:01:32.800 STDOUT terraform:  } 2025-09-19 00:01:32.801043 | orchestrator | 00:01:32.801 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-09-19 00:01:32.801098 | orchestrator | 00:01:32.801 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-09-19 00:01:32.801145 | orchestrator | 00:01:32.801 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-19 00:01:32.801196 | orchestrator | 00:01:32.801 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 00:01:32.801311 | orchestrator | 00:01:32.801 STDOUT terraform:  + availability_zone_hints = [ 2025-09-19 00:01:32.801375 | orchestrator | 00:01:32.801 STDOUT terraform:  + "nova", 2025-09-19 00:01:32.801442 | orchestrator | 00:01:32.801 STDOUT terraform:  ] 2025-09-19 00:01:32.801603 | orchestrator | 00:01:32.801 STDOUT terraform:  + distributed = (known after apply) 2025-09-19 00:01:32.801689 | orchestrator | 00:01:32.801 STDOUT terraform:  + enable_snat = (known after apply) 2025-09-19 00:01:32.801893 | orchestrator | 00:01:32.801 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-09-19 00:01:32.802044 | orchestrator | 00:01:32.801 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-09-19 00:01:32.802149 | orchestrator | 00:01:32.802 STDOUT terraform:  + id = (known after apply) 2025-09-19 00:01:32.802241 | orchestrator | 00:01:32.802 STDOUT terraform:  + name = "testbed" 2025-09-19 00:01:32.802298 | orchestrator | 00:01:32.802 STDOUT terraform:  + region = (known after apply) 2025-09-19 00:01:32.802345 | orchestrator | 00:01:32.802 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 00:01:32.802386 | orchestrator | 
00:01:32.802 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-09-19 00:01:32.802408 | orchestrator | 00:01:32.802 STDOUT terraform:  } 2025-09-19 00:01:32.802483 | orchestrator | 00:01:32.802 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-09-19 00:01:32.802570 | orchestrator | 00:01:32.802 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-09-19 00:01:32.802610 | orchestrator | 00:01:32.802 STDOUT terraform:  + description = "ssh" 2025-09-19 00:01:32.802674 | orchestrator | 00:01:32.802 STDOUT terraform:  + direction = "ingress" 2025-09-19 00:01:32.802712 | orchestrator | 00:01:32.802 STDOUT terraform:  + ethertype = "IPv4" 2025-09-19 00:01:32.802781 | orchestrator | 00:01:32.802 STDOUT terraform:  + id = (known after apply) 2025-09-19 00:01:32.802818 | orchestrator | 00:01:32.802 STDOUT terraform:  + port_range_max = 22 2025-09-19 00:01:32.802853 | orchestrator | 00:01:32.802 STDOUT terraform:  + port_range_min = 22 2025-09-19 00:01:32.802899 | orchestrator | 00:01:32.802 STDOUT terraform:  + protocol = "tcp" 2025-09-19 00:01:32.802952 | orchestrator | 00:01:32.802 STDOUT terraform:  + region = (known after apply) 2025-09-19 00:01:32.803035 | orchestrator | 00:01:32.802 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-19 00:01:32.803084 | orchestrator | 00:01:32.803 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-19 00:01:32.803123 | orchestrator | 00:01:32.803 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-19 00:01:32.803190 | orchestrator | 00:01:32.803 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-19 00:01:32.803241 | orchestrator | 00:01:32.803 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 00:01:32.803324 | orchestrator | 00:01:32.803 STDOUT terraform:  } 2025-09-19 00:01:32.803469 | orchestrator | 00:01:32.803 STDOUT 
terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-09-19 00:01:32.803608 | orchestrator | 00:01:32.803 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-09-19 00:01:32.803745 | orchestrator | 00:01:32.803 STDOUT terraform:  + description = "wireguard" 2025-09-19 00:01:32.803802 | orchestrator | 00:01:32.803 STDOUT terraform:  + direction = "ingress" 2025-09-19 00:01:32.803896 | orchestrator | 00:01:32.803 STDOUT terraform:  + ethertype = "IPv4" 2025-09-19 00:01:32.803965 | orchestrator | 00:01:32.803 STDOUT terraform:  + id = (known after apply) 2025-09-19 00:01:32.804026 | orchestrator | 00:01:32.804 STDOUT terraform:  + port_range_max = 51820 2025-09-19 00:01:32.804071 | orchestrator | 00:01:32.804 STDOUT terraform:  + port_range_min = 51820 2025-09-19 00:01:32.804152 | orchestrator | 00:01:32.804 STDOUT terraform:  + protocol = "udp" 2025-09-19 00:01:32.804246 | orchestrator | 00:01:32.804 STDOUT terraform:  + region = (known after apply) 2025-09-19 00:01:32.804316 | orchestrator | 00:01:32.804 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-19 00:01:32.804378 | orchestrator | 00:01:32.804 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-19 00:01:32.804428 | orchestrator | 00:01:32.804 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-19 00:01:32.804486 | orchestrator | 00:01:32.804 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-19 00:01:32.804545 | orchestrator | 00:01:32.804 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 00:01:32.804570 | orchestrator | 00:01:32.804 STDOUT terraform:  } 2025-09-19 00:01:32.804647 | orchestrator | 00:01:32.804 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-09-19 00:01:32.804724 | orchestrator | 00:01:32.804 STDOUT terraform:  + resource 
"openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-09-19 00:01:32.804811 | orchestrator | 00:01:32.804 STDOUT terraform:  + direction = "ingress" 2025-09-19 00:01:32.804861 | orchestrator | 00:01:32.804 STDOUT terraform:  + ethertype = "IPv4" 2025-09-19 00:01:32.804930 | orchestrator | 00:01:32.804 STDOUT terraform:  + id = (known after apply) 2025-09-19 00:01:32.804965 | orchestrator | 00:01:32.804 STDOUT terraform:  + protocol = "tcp" 2025-09-19 00:01:32.805025 | orchestrator | 00:01:32.804 STDOUT terraform:  + region = (known after apply) 2025-09-19 00:01:32.805084 | orchestrator | 00:01:32.805 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-19 00:01:32.805258 | orchestrator | 00:01:32.805 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-19 00:01:32.805315 | orchestrator | 00:01:32.805 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-19 00:01:32.805459 | orchestrator | 00:01:32.805 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-19 00:01:32.805564 | orchestrator | 00:01:32.805 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 00:01:32.805626 | orchestrator | 00:01:32.805 STDOUT terraform:  } 2025-09-19 00:01:32.805746 | orchestrator | 00:01:32.805 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-09-19 00:01:32.805878 | orchestrator | 00:01:32.805 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-09-19 00:01:32.805963 | orchestrator | 00:01:32.805 STDOUT terraform:  + direction = "ingress" 2025-09-19 00:01:32.806137 | orchestrator | 00:01:32.806 STDOUT terraform:  + ethertype = "IPv4" 2025-09-19 00:01:32.806345 | orchestrator | 00:01:32.806 STDOUT terraform:  + id = (known after apply) 2025-09-19 00:01:32.806463 | orchestrator | 00:01:32.806 STDOUT terraform:  + protocol = "udp" 2025-09-19 00:01:32.808354 | 
orchestrator | 00:01:32.806 STDOUT terraform:  + region = (known after apply) 2025-09-19 00:01:32.808425 | orchestrator | 00:01:32.808 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-19 00:01:32.808449 | orchestrator | 00:01:32.808 STDOUT terraform:  + remot 2025-09-19 00:01:32.808578 | orchestrator | 00:01:32.808 STDOUT terraform: e_group_id = (known after apply) 2025-09-19 00:01:32.808669 | orchestrator | 00:01:32.808 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-19 00:01:32.808732 | orchestrator | 00:01:32.808 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-19 00:01:32.808832 | orchestrator | 00:01:32.808 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 00:01:32.808868 | orchestrator | 00:01:32.808 STDOUT terraform:  } 2025-09-19 00:01:32.808937 | orchestrator | 00:01:32.808 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-09-19 00:01:32.809130 | orchestrator | 00:01:32.808 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-09-19 00:01:32.809409 | orchestrator | 00:01:32.809 STDOUT terraform:  + direction = "ingress" 2025-09-19 00:01:32.809607 | orchestrator | 00:01:32.809 STDOUT terraform:  + ethertype = "IPv4" 2025-09-19 00:01:32.809672 | orchestrator | 00:01:32.809 STDOUT terraform:  + id = (known after apply) 2025-09-19 00:01:32.809740 | orchestrator | 00:01:32.809 STDOUT terraform:  + protocol = "icmp" 2025-09-19 00:01:32.809837 | orchestrator | 00:01:32.809 STDOUT terraform:  + region = (known after apply) 2025-09-19 00:01:32.809919 | orchestrator | 00:01:32.809 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-19 00:01:32.810049 | orchestrator | 00:01:32.809 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-19 00:01:32.810168 | orchestrator | 00:01:32.810 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 
2025-09-19 00:01:32.810314 | orchestrator | 00:01:32.810 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-19 00:01:32.810386 | orchestrator | 00:01:32.810 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 00:01:32.810455 | orchestrator | 00:01:32.810 STDOUT terraform:  }
2025-09-19 00:01:32.810620 | orchestrator | 00:01:32.810 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2025-09-19 00:01:32.810692 | orchestrator | 00:01:32.810 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2025-09-19 00:01:32.810743 | orchestrator | 00:01:32.810 STDOUT terraform:  + direction = "ingress"
2025-09-19 00:01:32.810808 | orchestrator | 00:01:32.810 STDOUT terraform:  + ethertype = "IPv4"
2025-09-19 00:01:32.810864 | orchestrator | 00:01:32.810 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.810921 | orchestrator | 00:01:32.810 STDOUT terraform:  + protocol = "tcp"
2025-09-19 00:01:32.810993 | orchestrator | 00:01:32.810 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.811056 | orchestrator | 00:01:32.811 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-19 00:01:32.811110 | orchestrator | 00:01:32.811 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-19 00:01:32.811155 | orchestrator | 00:01:32.811 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-19 00:01:32.811213 | orchestrator | 00:01:32.811 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-19 00:01:32.811259 | orchestrator | 00:01:32.811 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 00:01:32.811296 | orchestrator | 00:01:32.811 STDOUT terraform:  }
2025-09-19 00:01:32.811368 | orchestrator | 00:01:32.811 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2025-09-19 00:01:32.811439 | orchestrator | 00:01:32.811 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2025-09-19 00:01:32.811478 | orchestrator | 00:01:32.811 STDOUT terraform:  + direction = "ingress"
2025-09-19 00:01:32.811526 | orchestrator | 00:01:32.811 STDOUT terraform:  + ethertype = "IPv4"
2025-09-19 00:01:32.811571 | orchestrator | 00:01:32.811 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.811616 | orchestrator | 00:01:32.811 STDOUT terraform:  + protocol = "udp"
2025-09-19 00:01:32.811673 | orchestrator | 00:01:32.811 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.811716 | orchestrator | 00:01:32.811 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-19 00:01:32.811809 | orchestrator | 00:01:32.811 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-19 00:01:32.811864 | orchestrator | 00:01:32.811 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-19 00:01:32.811923 | orchestrator | 00:01:32.811 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-19 00:01:32.811967 | orchestrator | 00:01:32.811 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 00:01:32.812001 | orchestrator | 00:01:32.811 STDOUT terraform:  }
2025-09-19 00:01:32.812074 | orchestrator | 00:01:32.812 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-09-19 00:01:32.812149 | orchestrator | 00:01:32.812 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-09-19 00:01:32.812187 | orchestrator | 00:01:32.812 STDOUT terraform:  + direction = "ingress"
2025-09-19 00:01:32.812234 | orchestrator | 00:01:32.812 STDOUT terraform:  + ethertype = "IPv4"
2025-09-19 00:01:32.812280 | orchestrator | 00:01:32.812 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.812326 | orchestrator | 00:01:32.812 STDOUT terraform:  + protocol = "icmp"
2025-09-19 00:01:32.812384 | orchestrator | 00:01:32.812 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.812426 | orchestrator | 00:01:32.812 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-19 00:01:32.812482 | orchestrator | 00:01:32.812 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-19 00:01:32.812528 | orchestrator | 00:01:32.812 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-19 00:01:32.812587 | orchestrator | 00:01:32.812 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-19 00:01:32.812659 | orchestrator | 00:01:32.812 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 00:01:32.812698 | orchestrator | 00:01:32.812 STDOUT terraform:  }
2025-09-19 00:01:32.814058 | orchestrator | 00:01:32.812 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-09-19 00:01:32.814174 | orchestrator | 00:01:32.814 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-09-19 00:01:32.814211 | orchestrator | 00:01:32.814 STDOUT terraform:  + description = "vrrp"
2025-09-19 00:01:32.814299 | orchestrator | 00:01:32.814 STDOUT terraform:  + direction = "ingress"
2025-09-19 00:01:32.814353 | orchestrator | 00:01:32.814 STDOUT terraform:  + ethertype = "IPv4"
2025-09-19 00:01:32.814363 | orchestrator | 00:01:32.814 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.814439 | orchestrator | 00:01:32.814 STDOUT terraform:  + protocol = "112"
2025-09-19 00:01:32.814493 | orchestrator | 00:01:32.814 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.814560 | orchestrator | 00:01:32.814 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-19 00:01:32.814615 | orchestrator | 00:01:32.814 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-19 00:01:32.814742 | orchestrator | 00:01:32.814 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-19 00:01:32.814840 | orchestrator | 00:01:32.814 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-19 00:01:32.814902 | orchestrator | 00:01:32.814 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 00:01:32.814928 | orchestrator | 00:01:32.814 STDOUT terraform:  }
2025-09-19 00:01:32.815013 | orchestrator | 00:01:32.814 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-09-19 00:01:32.815098 | orchestrator | 00:01:32.815 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-09-19 00:01:32.815146 | orchestrator | 00:01:32.815 STDOUT terraform:  + all_tags = (known after apply)
2025-09-19 00:01:32.815203 | orchestrator | 00:01:32.815 STDOUT terraform:  + description = "management security group"
2025-09-19 00:01:32.815254 | orchestrator | 00:01:32.815 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.815301 | orchestrator | 00:01:32.815 STDOUT terraform:  + name = "testbed-management"
2025-09-19 00:01:32.815349 | orchestrator | 00:01:32.815 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.815396 | orchestrator | 00:01:32.815 STDOUT terraform:  + stateful = (known after apply)
2025-09-19 00:01:32.815441 | orchestrator | 00:01:32.815 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 00:01:32.815463 | orchestrator | 00:01:32.815 STDOUT terraform:  }
2025-09-19 00:01:32.815540 | orchestrator | 00:01:32.815 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-09-19 00:01:32.815634 | orchestrator | 00:01:32.815 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-09-19 00:01:32.815680 | orchestrator | 00:01:32.815 STDOUT terraform:  + all_tags = (known after apply)
2025-09-19 00:01:32.815726 | orchestrator | 00:01:32.815 STDOUT terraform:  + description = "node security group"
2025-09-19 00:01:32.815812 | orchestrator | 00:01:32.815 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.815851 | orchestrator | 00:01:32.815 STDOUT terraform:  + name = "testbed-node"
2025-09-19 00:01:32.815901 | orchestrator | 00:01:32.815 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.815947 | orchestrator | 00:01:32.815 STDOUT terraform:  + stateful = (known after apply)
2025-09-19 00:01:32.815991 | orchestrator | 00:01:32.815 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 00:01:32.816011 | orchestrator | 00:01:32.815 STDOUT terraform:  }
2025-09-19 00:01:32.816081 | orchestrator | 00:01:32.816 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-09-19 00:01:32.816154 | orchestrator | 00:01:32.816 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-09-19 00:01:32.816198 | orchestrator | 00:01:32.816 STDOUT terraform:  + all_tags = (known after apply)
2025-09-19 00:01:32.816258 | orchestrator | 00:01:32.816 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-09-19 00:01:32.816269 | orchestrator | 00:01:32.816 STDOUT terraform:  + dns_nameservers = [
2025-09-19 00:01:32.816293 | orchestrator | 00:01:32.816 STDOUT terraform:  + "8.8.8.8",
2025-09-19 00:01:32.816312 | orchestrator | 00:01:32.816 STDOUT terraform:  + "9.9.9.9",
2025-09-19 00:01:32.816534 | orchestrator | 00:01:32.816 STDOUT terraform:  ]
2025-09-19 00:01:32.816550 | orchestrator | 00:01:32.816 STDOUT terraform:  + enable_dhcp = true
2025-09-19 00:01:32.816556 | orchestrator | 00:01:32.816 STDOUT terraform:  + gateway_ip = (known after apply)
2025-09-19 00:01:32.816562 | orchestrator | 00:01:32.816 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.816587 | orchestrator | 00:01:32.816 STDOUT terraform:  + ip_version = 4
2025-09-19 00:01:32.816595 | orchestrator | 00:01:32.816 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-09-19 00:01:32.816605 | orchestrator | 00:01:32.816 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-09-19 00:01:32.816610 | orchestrator | 00:01:32.816 STDOUT terraform:  + name = "subnet-testbed-management"
2025-09-19 00:01:32.816652 | orchestrator | 00:01:32.816 STDOUT terraform:  + network_id = (known after apply)
2025-09-19 00:01:32.816682 | orchestrator | 00:01:32.816 STDOUT terraform:  + no_gateway = false
2025-09-19 00:01:32.816730 | orchestrator | 00:01:32.816 STDOUT terraform:  + region = (known after apply)
2025-09-19 00:01:32.816922 | orchestrator | 00:01:32.816 STDOUT terraform:  + service_types = (known after apply)
2025-09-19 00:01:32.816997 | orchestrator | 00:01:32.816 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 00:01:32.817011 | orchestrator | 00:01:32.816 STDOUT terraform:  + allocation_pool {
2025-09-19 00:01:32.817032 | orchestrator | 00:01:32.816 STDOUT terraform:  + end = "192.168.31.250"
2025-09-19 00:01:32.817044 | orchestrator | 00:01:32.816 STDOUT terraform:  + start = "192.168.31.200"
2025-09-19 00:01:32.817055 | orchestrator | 00:01:32.816 STDOUT terraform:  }
2025-09-19 00:01:32.817067 | orchestrator | 00:01:32.816 STDOUT terraform:  }
2025-09-19 00:01:32.817078 | orchestrator | 00:01:32.816 STDOUT terraform:  # terraform_data.image will be created
2025-09-19 00:01:32.817089 | orchestrator | 00:01:32.816 STDOUT terraform:  + resource "terraform_data" "image" {
2025-09-19 00:01:32.817104 | orchestrator | 00:01:32.817 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.817115 | orchestrator | 00:01:32.817 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-09-19 00:01:32.817126 | orchestrator | 00:01:32.817 STDOUT terraform:  + output = (known after apply)
2025-09-19 00:01:32.817141 | orchestrator | 00:01:32.817 STDOUT terraform:  }
2025-09-19 00:01:32.817173 | orchestrator | 00:01:32.817 STDOUT terraform:  # terraform_data.image_node will be created
2025-09-19 00:01:32.817188 | orchestrator | 00:01:32.817 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-09-19 00:01:32.817225 | orchestrator | 00:01:32.817 STDOUT terraform:  + id = (known after apply)
2025-09-19 00:01:32.817242 | orchestrator | 00:01:32.817 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-09-19 00:01:32.817290 | orchestrator | 00:01:32.817 STDOUT terraform:  + output = (known after apply)
2025-09-19 00:01:32.817307 | orchestrator | 00:01:32.817 STDOUT terraform:  }
2025-09-19 00:01:32.817347 | orchestrator | 00:01:32.817 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-09-19 00:01:32.817384 | orchestrator | 00:01:32.817 STDOUT terraform: Changes to Outputs:
2025-09-19 00:01:32.817399 | orchestrator | 00:01:32.817 STDOUT terraform:  + manager_address = (sensitive value)
2025-09-19 00:01:32.817434 | orchestrator | 00:01:32.817 STDOUT terraform:  + private_key = (sensitive value)
2025-09-19 00:01:32.995119 | orchestrator | 00:01:32.994 STDOUT terraform: terraform_data.image_node: Creating...
2025-09-19 00:01:32.996699 | orchestrator | 00:01:32.996 STDOUT terraform: terraform_data.image: Creating...
2025-09-19 00:01:32.996991 | orchestrator | 00:01:32.996 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=c1e800be-cb93-b400-6a2f-ae860a2ee7f2]
2025-09-19 00:01:32.999021 | orchestrator | 00:01:32.998 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=b9771901-7d22-9f1b-51c5-f73a021e387e]
2025-09-19 00:01:33.022385 | orchestrator | 00:01:33.022 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-09-19 00:01:33.030311 | orchestrator | 00:01:33.029 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-09-19 00:01:33.031741 | orchestrator | 00:01:33.031 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-09-19 00:01:33.032388 | orchestrator | 00:01:33.032 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-09-19 00:01:33.032581 | orchestrator | 00:01:33.032 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-09-19 00:01:33.033722 | orchestrator | 00:01:33.033 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-09-19 00:01:33.035947 | orchestrator | 00:01:33.035 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-09-19 00:01:33.038644 | orchestrator | 00:01:33.038 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-09-19 00:01:33.043124 | orchestrator | 00:01:33.042 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-09-19 00:01:33.046695 | orchestrator | 00:01:33.046 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-09-19 00:01:33.529503 | orchestrator | 00:01:33.528 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-19 00:01:33.530962 | orchestrator | 00:01:33.530 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2025-09-19 00:01:33.536077 | orchestrator | 00:01:33.535 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-09-19 00:01:33.538132 | orchestrator | 00:01:33.537 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-19 00:01:33.540000 | orchestrator | 00:01:33.539 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-09-19 00:01:33.544017 | orchestrator | 00:01:33.543 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-09-19 00:01:34.038254 | orchestrator | 00:01:34.038 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=9eb6e1c5-2142-462f-b166-430bd414147a]
2025-09-19 00:01:34.046573 | orchestrator | 00:01:34.046 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-09-19 00:01:36.644869 | orchestrator | 00:01:36.644 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=253dac68-3781-42b7-8d02-e83cc46bb576]
2025-09-19 00:01:36.654230 | orchestrator | 00:01:36.653 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-09-19 00:01:36.672853 | orchestrator | 00:01:36.672 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=ace41295-549a-4643-92eb-07daa5f39402]
2025-09-19 00:01:36.678848 | orchestrator | 00:01:36.678 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-09-19 00:01:36.700187 | orchestrator | 00:01:36.700 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=037340a3-0b4d-471e-9cf4-4052731628bd]
2025-09-19 00:01:36.712353 | orchestrator | 00:01:36.712 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=5c96df58-7556-4413-84d6-ffa963b8d5b4]
2025-09-19 00:01:36.717254 | orchestrator | 00:01:36.717 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-09-19 00:01:36.717872 | orchestrator | 00:01:36.717 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-09-19 00:01:36.721991 | orchestrator | 00:01:36.721 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=b274d452-dc05-477a-a838-600cb81e7cbe]
2025-09-19 00:01:36.722266 | orchestrator | 00:01:36.722 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=84c7f694bc1625c896090f156efab770aeec96c2]
2025-09-19 00:01:36.725396 | orchestrator | 00:01:36.725 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-09-19 00:01:36.726302 | orchestrator | 00:01:36.726 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-09-19 00:01:36.726636 | orchestrator | 00:01:36.726 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=7d2555f8-8f26-4f5e-8b79-cd121c4d405f]
2025-09-19 00:01:36.732412 | orchestrator | 00:01:36.732 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-09-19 00:01:36.744912 | orchestrator | 00:01:36.744 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=841391799df73c3bc8056cb85ad613710406fb73]
2025-09-19 00:01:36.751594 | orchestrator | 00:01:36.751 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-09-19 00:01:36.774966 | orchestrator | 00:01:36.774 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=7d861b66-423b-4a73-89d0-4a2393a19521]
2025-09-19 00:01:36.780083 | orchestrator | 00:01:36.779 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-09-19 00:01:36.790503 | orchestrator | 00:01:36.790 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=94fdce60-5769-46af-b883-c01ec9bbc4f3]
2025-09-19 00:01:36.934203 | orchestrator | 00:01:36.933 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=5095dff0-407e-4b8b-811f-a3c5cd55a16d]
2025-09-19 00:01:37.361791 | orchestrator | 00:01:37.361 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=60cbc511-46f7-41b8-8fa9-930abf7265d3]
2025-09-19 00:01:37.700632 | orchestrator | 00:01:37.700 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=73fcd435-d159-4dfa-b937-83b2ecd6eb0b]
2025-09-19 00:01:37.710765 | orchestrator | 00:01:37.710 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-09-19 00:01:40.060308 | orchestrator | 00:01:40.059 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=199ddf9d-b638-421d-a1bc-96e0d48590a2]
2025-09-19 00:01:40.063585 | orchestrator | 00:01:40.063 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=a889b39a-32f5-4a00-874e-3d7c73e2372c]
2025-09-19 00:01:40.090507 | orchestrator | 00:01:40.090 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=45faf48c-5427-4049-a3b0-222ba6087f49]
2025-09-19 00:01:40.132435 | orchestrator | 00:01:40.132 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=3adbf97e-ee72-4483-9697-646cf4299ea9]
2025-09-19 00:01:40.155786 | orchestrator | 00:01:40.155 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=a51a6fa4-0e4d-458a-998c-1ee2b13022e6]
2025-09-19 00:01:40.170210 | orchestrator | 00:01:40.169 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=55973005-cab9-4651-a089-f76828fe5b13]
2025-09-19 00:01:40.416466 | orchestrator | 00:01:40.416 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 2s [id=53038b5f-0bb2-49e1-8ea3-1f72990baca3]
2025-09-19 00:01:40.426822 | orchestrator | 00:01:40.423 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-09-19 00:01:40.426894 | orchestrator | 00:01:40.425 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-09-19 00:01:40.428519 | orchestrator | 00:01:40.428 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-09-19 00:01:40.615954 | orchestrator | 00:01:40.615 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=8f508161-26b0-45ec-83aa-28dbd0e3eb7c]
2025-09-19 00:01:40.630822 | orchestrator | 00:01:40.630 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-09-19 00:01:40.630912 | orchestrator | 00:01:40.630 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-09-19 00:01:40.631937 | orchestrator | 00:01:40.631 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-09-19 00:01:40.631984 | orchestrator | 00:01:40.631 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-09-19 00:01:40.632902 | orchestrator | 00:01:40.632 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-09-19 00:01:40.639827 | orchestrator | 00:01:40.639 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-09-19 00:01:40.641390 | orchestrator | 00:01:40.641 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=4e364070-51a8-42a5-b512-d9721178907f]
2025-09-19 00:01:40.644739 | orchestrator | 00:01:40.644 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-09-19 00:01:40.645532 | orchestrator | 00:01:40.645 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-09-19 00:01:40.646387 | orchestrator | 00:01:40.646 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-09-19 00:01:40.850450 | orchestrator | 00:01:40.850 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=91aacf2f-1703-4dcd-b52e-63acd4952e4a]
2025-09-19 00:01:40.856975 | orchestrator | 00:01:40.856 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-09-19 00:01:40.995034 | orchestrator | 00:01:40.994 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=49b61bc2-24e3-4a13-b1b6-0fc26abb20c9]
2025-09-19 00:01:41.011081 | orchestrator | 00:01:41.010 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-09-19 00:01:41.120189 | orchestrator | 00:01:41.119 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=a2d74b80-f615-49aa-8701-46033074f034]
2025-09-19 00:01:41.137129 | orchestrator | 00:01:41.136 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-09-19 00:01:41.291868 | orchestrator | 00:01:41.291 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=8185da05-04ff-4bb1-b9ab-1f4c6efef3e4]
2025-09-19 00:01:41.301909 | orchestrator | 00:01:41.301 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=eac84548-1c38-4caa-b8f5-0764f834ca6a]
2025-09-19 00:01:41.311612 | orchestrator | 00:01:41.310 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-09-19 00:01:41.324853 | orchestrator | 00:01:41.324 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-09-19 00:01:41.358061 | orchestrator | 00:01:41.357 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=cf24b7a1-8b3a-48e8-bdb9-9951ba2182df]
2025-09-19 00:01:41.370005 | orchestrator | 00:01:41.369 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-09-19 00:01:41.522996 | orchestrator | 00:01:41.522 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=0e143b35-52c1-4716-83ed-308c32af9169]
2025-09-19 00:01:41.538577 | orchestrator | 00:01:41.538 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-09-19 00:01:41.651067 | orchestrator | 00:01:41.650 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=4322ed86-63d3-460d-baf4-3c9e90a53e85]
2025-09-19 00:01:41.717982 | orchestrator | 00:01:41.717 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=d3cb2e73-2739-4d88-90c6-81be9d65251b]
2025-09-19 00:01:41.764604 | orchestrator | 00:01:41.764 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=646975c1-6acf-49ac-b1c1-c31995c1a6cd]
2025-09-19 00:01:41.886681 | orchestrator | 00:01:41.886 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=cf941cdd-4470-4f75-b4cf-cb766c70a508]
2025-09-19 00:01:41.910670 | orchestrator | 00:01:41.910 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=f5837a12-e1d2-41f4-9345-decbe46102e6]
2025-09-19 00:01:42.139393 | orchestrator | 00:01:42.139 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=20cf926e-fe55-4166-9ec3-5ba6d6ac75f8]
2025-09-19 00:01:42.211121 | orchestrator | 00:01:42.210 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=5ab2208b-cb81-4e83-8d1e-6b4b60556b0a]
2025-09-19 00:01:42.435422 | orchestrator | 00:01:42.435 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 0s [id=7ea935c7-a173-4a21-bb48-7ba6761c6e4f]
2025-09-19 00:01:43.229531 | orchestrator | 00:01:43.229 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=7dd2c10b-00e9-4e0e-8082-e800cf48e6ab]
2025-09-19 00:01:43.596721 | orchestrator | 00:01:43.596 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=d857791b-fc61-44af-b898-caa6712a4ba6]
2025-09-19 00:01:43.622280 | orchestrator | 00:01:43.622 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-09-19 00:01:43.632167 | orchestrator | 00:01:43.631 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-09-19 00:01:43.634137 | orchestrator | 00:01:43.633 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-09-19 00:01:43.642583 | orchestrator | 00:01:43.642 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-09-19 00:01:43.642885 | orchestrator | 00:01:43.642 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-09-19 00:01:43.648106 | orchestrator | 00:01:43.647 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-09-19 00:01:43.655050 | orchestrator | 00:01:43.654 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-09-19 00:01:46.346724 | orchestrator | 00:01:46.346 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=6346ed76-3c6c-48b1-99e9-9b58efe80572]
2025-09-19 00:01:46.355405 | orchestrator | 00:01:46.354 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-09-19 00:01:46.362299 | orchestrator | 00:01:46.362 STDOUT terraform: local_file.inventory: Creating...
2025-09-19 00:01:46.362350 | orchestrator | 00:01:46.362 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-09-19 00:01:46.366330 | orchestrator | 00:01:46.366 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=a4c34420c941307fb06ceccdbbef588c72ffe804]
2025-09-19 00:01:46.366966 | orchestrator | 00:01:46.366 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=f1f8f3008ea2c8c0177f542be5947ef0394c735b]
2025-09-19 00:01:47.467833 | orchestrator | 00:01:47.467 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=6346ed76-3c6c-48b1-99e9-9b58efe80572]
2025-09-19 00:01:53.633393 | orchestrator | 00:01:53.633 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-09-19 00:01:53.641591 | orchestrator | 00:01:53.641 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-09-19 00:01:53.645916 | orchestrator | 00:01:53.645 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-09-19 00:01:53.647041 | orchestrator | 00:01:53.646 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-09-19 00:01:53.652704 | orchestrator | 00:01:53.652 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-09-19 00:01:53.655915 | orchestrator | 00:01:53.655 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-09-19 00:02:03.636306 | orchestrator | 00:02:03.635 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-09-19 00:02:03.642459 | orchestrator | 00:02:03.642 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-09-19 00:02:03.646834 | orchestrator | 00:02:03.646 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-09-19 00:02:03.647951 | orchestrator | 00:02:03.647 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-09-19 00:02:03.653314 | orchestrator | 00:02:03.653 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-09-19 00:02:03.656877 | orchestrator | 00:02:03.656 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-09-19 00:02:04.254964 | orchestrator | 00:02:04.254 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=fe5471c8-969d-4597-9b21-adc23d45221e]
2025-09-19 00:02:04.383278 | orchestrator | 00:02:04.382 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 20s [id=e54cb877-1194-4a8a-94d6-54dc203dbc53]
2025-09-19 00:02:04.674941 | orchestrator | 00:02:04.674 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=22a3443d-67b5-4a71-a77b-10747e76db68]
2025-09-19 00:02:04.747819 | orchestrator | 00:02:04.747 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=1ced838a-4ea3-4d39-8901-dde30c8f7d0b]
2025-09-19 00:02:13.639128 | orchestrator | 00:02:13.638 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-09-19 00:02:13.647463 | orchestrator | 00:02:13.647 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-09-19 00:02:15.112611 | orchestrator | 00:02:15.112 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=5a78e924-666b-4156-a1ba-65f56abd4807]
2025-09-19 00:02:15.147298 | orchestrator | 00:02:15.147 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=9da80984-d6dc-40e0-8d1d-4f02b3a1f963]
2025-09-19 00:02:15.164928 | orchestrator | 00:02:15.164 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-09-19 00:02:15.165136 | orchestrator | 00:02:15.165 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-09-19 00:02:15.177075 | orchestrator | 00:02:15.176 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-09-19 00:02:15.184007 | orchestrator | 00:02:15.183 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=3636698373118112706]
2025-09-19 00:02:15.185171 | orchestrator | 00:02:15.185 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-09-19 00:02:15.185906 | orchestrator | 00:02:15.185 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-09-19 00:02:15.186835 | orchestrator | 00:02:15.186 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-09-19 00:02:15.198193 | orchestrator | 00:02:15.198 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-09-19 00:02:15.201042 | orchestrator | 00:02:15.200 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-09-19 00:02:15.205343 | orchestrator | 00:02:15.205 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-09-19 00:02:15.210183 | orchestrator | 00:02:15.210 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-09-19 00:02:15.218504 | orchestrator | 00:02:15.218 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-09-19 00:02:18.559871 | orchestrator | 00:02:18.559 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=fe5471c8-969d-4597-9b21-adc23d45221e/b274d452-dc05-477a-a838-600cb81e7cbe]
2025-09-19 00:02:18.580246 | orchestrator | 00:02:18.579 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=1ced838a-4ea3-4d39-8901-dde30c8f7d0b/ace41295-549a-4643-92eb-07daa5f39402]
2025-09-19 00:02:18.591677 | orchestrator | 00:02:18.591 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=5a78e924-666b-4156-a1ba-65f56abd4807/253dac68-3781-42b7-8d02-e83cc46bb576]
2025-09-19 00:02:18.613645 | orchestrator | 00:02:18.613 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=fe5471c8-969d-4597-9b21-adc23d45221e/7d861b66-423b-4a73-89d0-4a2393a19521]
2025-09-19 00:02:18.632929 | orchestrator | 00:02:18.632 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=5a78e924-666b-4156-a1ba-65f56abd4807/037340a3-0b4d-471e-9cf4-4052731628bd]
2025-09-19 00:02:18.647172 | orchestrator | 00:02:18.646 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=1ced838a-4ea3-4d39-8901-dde30c8f7d0b/7d2555f8-8f26-4f5e-8b79-cd121c4d405f]
2025-09-19 00:02:18.661837 | orchestrator | 00:02:18.661 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=fe5471c8-969d-4597-9b21-adc23d45221e/94fdce60-5769-46af-b883-c01ec9bbc4f3]
2025-09-19 00:02:24.743138 | orchestrator | 00:02:24.742 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=1ced838a-4ea3-4d39-8901-dde30c8f7d0b/5095dff0-407e-4b8b-811f-a3c5cd55a16d]
2025-09-19 00:02:24.753018 | orchestrator | 00:02:24.752 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=5a78e924-666b-4156-a1ba-65f56abd4807/5c96df58-7556-4413-84d6-ffa963b8d5b4]
2025-09-19 00:02:25.221954 | orchestrator | 00:02:25.221 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-09-19 00:02:35.222173 | orchestrator | 00:02:35.221 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-09-19 00:02:35.720841 | orchestrator | 00:02:35.720 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=2b5dfa40-3f48-4bee-8f09-fde8cd78611b]
2025-09-19 00:02:35.737520 | orchestrator | 00:02:35.737 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-09-19 00:02:35.737597 | orchestrator | 00:02:35.737 STDOUT terraform: Outputs: 2025-09-19 00:02:35.737610 | orchestrator | 00:02:35.737 STDOUT terraform: manager_address = 2025-09-19 00:02:35.737620 | orchestrator | 00:02:35.737 STDOUT terraform: private_key = 2025-09-19 00:02:35.944734 | orchestrator | ok: Runtime: 0:01:09.486060 2025-09-19 00:02:35.979428 | 2025-09-19 00:02:35.979521 | TASK [Fetch manager address] 2025-09-19 00:02:36.404161 | orchestrator | ok 2025-09-19 00:02:36.411144 | 2025-09-19 00:02:36.411238 | TASK [Set manager_host address] 2025-09-19 00:02:36.490932 | orchestrator | ok 2025-09-19 00:02:36.501152 | 2025-09-19 00:02:36.501264 | LOOP [Update ansible collections] 2025-09-19 00:02:37.467578 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-19 00:02:37.468084 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-19 00:02:37.468184 | orchestrator | Starting galaxy collection install process 2025-09-19 00:02:37.468250 | orchestrator | Process install dependency map 2025-09-19 00:02:37.468306 | orchestrator | Starting collection install process 2025-09-19 00:02:37.468357 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons' 2025-09-19 00:02:37.468414 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons 2025-09-19 00:02:37.468477 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-09-19 00:02:37.468575 | orchestrator | ok: Item: commons Runtime: 0:00:00.642677 2025-09-19 00:02:38.350470 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-19 00:02:38.350628 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-19 00:02:38.350711 | orchestrator | Starting galaxy 
collection install process 2025-09-19 00:02:38.350762 | orchestrator | Process install dependency map 2025-09-19 00:02:38.350807 | orchestrator | Starting collection install process 2025-09-19 00:02:38.350874 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services' 2025-09-19 00:02:38.350917 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services 2025-09-19 00:02:38.350957 | orchestrator | osism.services:999.0.0 was installed successfully 2025-09-19 00:02:38.351017 | orchestrator | ok: Item: services Runtime: 0:00:00.623034 2025-09-19 00:02:38.369444 | 2025-09-19 00:02:38.369569 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-19 00:02:48.882508 | orchestrator | ok 2025-09-19 00:02:48.893892 | 2025-09-19 00:02:48.894014 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-19 00:03:48.934194 | orchestrator | ok 2025-09-19 00:03:48.944122 | 2025-09-19 00:03:48.944235 | TASK [Fetch manager ssh hostkey] 2025-09-19 00:03:50.516158 | orchestrator | Output suppressed because no_log was given 2025-09-19 00:03:50.532272 | 2025-09-19 00:03:50.532436 | TASK [Get ssh keypair from terraform environment] 2025-09-19 00:03:51.068863 | orchestrator | ok: Runtime: 0:00:00.009202 2025-09-19 00:03:51.086943 | 2025-09-19 00:03:51.087108 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-19 00:03:51.127450 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2025-09-19 00:03:51.137556 | 2025-09-19 00:03:51.137731 | TASK [Run manager part 0] 2025-09-19 00:03:52.039779 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-19 00:03:52.085134 | orchestrator | 2025-09-19 00:03:52.085182 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-09-19 00:03:52.085189 | orchestrator | 2025-09-19 00:03:52.085202 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-09-19 00:03:53.851392 | orchestrator | ok: [testbed-manager] 2025-09-19 00:03:53.851474 | orchestrator | 2025-09-19 00:03:53.851520 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-19 00:03:53.851540 | orchestrator | 2025-09-19 00:03:53.851559 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 00:03:55.746497 | orchestrator | ok: [testbed-manager] 2025-09-19 00:03:55.746555 | orchestrator | 2025-09-19 00:03:55.746563 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-19 00:03:56.401201 | orchestrator | ok: [testbed-manager] 2025-09-19 00:03:56.401340 | orchestrator | 2025-09-19 00:03:56.401352 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-19 00:03:56.448028 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:03:56.448076 | orchestrator | 2025-09-19 00:03:56.448085 | orchestrator | TASK [Update package cache] **************************************************** 2025-09-19 00:03:56.473883 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:03:56.473930 | orchestrator | 2025-09-19 00:03:56.473938 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-19 00:03:56.499563 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:03:56.499631 | 
orchestrator | 2025-09-19 00:03:56.499641 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-19 00:03:56.529248 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:03:56.529300 | orchestrator | 2025-09-19 00:03:56.529308 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-19 00:03:56.553357 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:03:56.553416 | orchestrator | 2025-09-19 00:03:56.553425 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-09-19 00:03:56.584014 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:03:56.584064 | orchestrator | 2025-09-19 00:03:56.584072 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-09-19 00:03:56.614773 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:03:56.614825 | orchestrator | 2025-09-19 00:03:56.614835 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-09-19 00:03:57.316466 | orchestrator | changed: [testbed-manager] 2025-09-19 00:03:57.316518 | orchestrator | 2025-09-19 00:03:57.316524 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-09-19 00:06:46.580770 | orchestrator | changed: [testbed-manager] 2025-09-19 00:06:46.580873 | orchestrator | 2025-09-19 00:06:46.580892 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-19 00:08:06.905954 | orchestrator | changed: [testbed-manager] 2025-09-19 00:08:06.906120 | orchestrator | 2025-09-19 00:08:06.906157 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-19 00:08:35.175516 | orchestrator | changed: [testbed-manager] 2025-09-19 00:08:35.175679 | orchestrator | 2025-09-19 00:08:35.175706 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-09-19 00:08:45.856998 | orchestrator | changed: [testbed-manager] 2025-09-19 00:08:45.857097 | orchestrator | 2025-09-19 00:08:45.857112 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-19 00:08:45.907476 | orchestrator | ok: [testbed-manager] 2025-09-19 00:08:45.907564 | orchestrator | 2025-09-19 00:08:45.907580 | orchestrator | TASK [Get current user] ******************************************************** 2025-09-19 00:08:46.692444 | orchestrator | ok: [testbed-manager] 2025-09-19 00:08:46.692548 | orchestrator | 2025-09-19 00:08:46.692567 | orchestrator | TASK [Create venv directory] *************************************************** 2025-09-19 00:08:47.442637 | orchestrator | changed: [testbed-manager] 2025-09-19 00:08:47.442710 | orchestrator | 2025-09-19 00:08:47.442726 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-09-19 00:08:53.783677 | orchestrator | changed: [testbed-manager] 2025-09-19 00:08:53.783785 | orchestrator | 2025-09-19 00:08:53.783826 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-09-19 00:08:59.816329 | orchestrator | changed: [testbed-manager] 2025-09-19 00:08:59.816385 | orchestrator | 2025-09-19 00:08:59.816399 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-09-19 00:09:02.450874 | orchestrator | changed: [testbed-manager] 2025-09-19 00:09:02.450968 | orchestrator | 2025-09-19 00:09:02.450980 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-09-19 00:09:04.205784 | orchestrator | changed: [testbed-manager] 2025-09-19 00:09:04.205843 | orchestrator | 2025-09-19 00:09:04.205851 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-09-19 
00:09:05.335015 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-19 00:09:05.335060 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-19 00:09:05.335065 | orchestrator | 2025-09-19 00:09:05.335070 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-09-19 00:09:05.390103 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-19 00:09:05.390160 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-19 00:09:05.390170 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-19 00:09:05.390178 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-09-19 00:09:09.186994 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-19 00:09:09.187039 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-19 00:09:09.187045 | orchestrator | 2025-09-19 00:09:09.187051 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-09-19 00:09:09.765780 | orchestrator | changed: [testbed-manager] 2025-09-19 00:09:09.765867 | orchestrator | 2025-09-19 00:09:09.765883 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-09-19 00:09:30.357913 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-09-19 00:09:30.357957 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-09-19 00:09:30.357965 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-09-19 00:09:30.357971 | orchestrator | 2025-09-19 00:09:30.357977 | orchestrator | TASK [Install local collections] *********************************************** 2025-09-19 00:09:32.701555 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-09-19 00:09:32.701719 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-09-19 00:09:32.701738 | orchestrator | 2025-09-19 00:09:32.701751 | orchestrator | PLAY [Create operator user] **************************************************** 2025-09-19 00:09:32.701763 | orchestrator | 2025-09-19 00:09:32.701775 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 00:09:34.073055 | orchestrator | ok: [testbed-manager] 2025-09-19 00:09:34.073148 | orchestrator | 2025-09-19 00:09:34.073420 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-19 00:09:34.117513 | orchestrator | ok: [testbed-manager] 2025-09-19 00:09:34.117636 | orchestrator | 2025-09-19 00:09:34.117661 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-19 00:09:34.187342 | orchestrator | ok: [testbed-manager] 2025-09-19 00:09:34.187434 | orchestrator | 2025-09-19 00:09:34.187450 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-19 00:09:34.983387 | orchestrator | changed: [testbed-manager] 2025-09-19 00:09:34.983476 | orchestrator | 2025-09-19 00:09:34.983492 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-19 00:09:35.747670 | orchestrator | changed: [testbed-manager] 2025-09-19 00:09:35.747776 | orchestrator | 2025-09-19 00:09:35.747801 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-19 00:09:37.092330 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-09-19 00:09:37.092371 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-09-19 00:09:37.092379 | orchestrator | 2025-09-19 00:09:37.092393 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-09-19 00:09:38.419469 | orchestrator | changed: [testbed-manager] 2025-09-19 00:09:38.419522 | orchestrator | 2025-09-19 00:09:38.419530 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-19 00:09:40.155509 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-09-19 00:09:40.155587 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-09-19 00:09:40.155620 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-09-19 00:09:40.155629 | orchestrator | 2025-09-19 00:09:40.155639 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-19 00:09:40.214378 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:09:40.214416 | orchestrator | 2025-09-19 00:09:40.214424 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-19 00:09:40.784001 | orchestrator | changed: [testbed-manager] 2025-09-19 00:09:40.784644 | orchestrator | 2025-09-19 00:09:40.784677 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-19 00:09:40.857101 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:09:40.857153 | orchestrator | 2025-09-19 00:09:40.857159 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-19 00:09:41.730628 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-19 00:09:41.730683 | orchestrator | changed: [testbed-manager] 2025-09-19 00:09:41.730692 | orchestrator | 2025-09-19 00:09:41.730700 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-19 00:09:41.771711 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:09:41.771783 | orchestrator | 2025-09-19 00:09:41.771798 | orchestrator | TASK 
[osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-19 00:09:41.809852 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:09:41.809904 | orchestrator | 2025-09-19 00:09:41.809913 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-19 00:09:41.838129 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:09:41.838174 | orchestrator | 2025-09-19 00:09:41.838181 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-19 00:09:41.876197 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:09:41.876355 | orchestrator | 2025-09-19 00:09:41.876421 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-19 00:09:42.600394 | orchestrator | ok: [testbed-manager] 2025-09-19 00:09:42.600541 | orchestrator | 2025-09-19 00:09:42.600548 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-19 00:09:42.600553 | orchestrator | 2025-09-19 00:09:42.600557 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 00:09:43.990333 | orchestrator | ok: [testbed-manager] 2025-09-19 00:09:43.990367 | orchestrator | 2025-09-19 00:09:43.990374 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-09-19 00:09:44.955131 | orchestrator | changed: [testbed-manager] 2025-09-19 00:09:44.955175 | orchestrator | 2025-09-19 00:09:44.955183 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:09:44.955190 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-19 00:09:44.955196 | orchestrator | 2025-09-19 00:09:45.402282 | orchestrator | ok: Runtime: 0:05:53.595522 2025-09-19 00:09:45.421890 | 2025-09-19 00:09:45.422118 | TASK [Point 
out that the log in on the manager is now possible] 2025-09-19 00:09:45.471227 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-09-19 00:09:45.481478 | 2025-09-19 00:09:45.481614 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-19 00:09:45.533492 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-19 00:09:45.544774 | 2025-09-19 00:09:45.544914 | TASK [Run manager part 1 + 2] 2025-09-19 00:09:46.398387 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-19 00:09:46.453567 | orchestrator | 2025-09-19 00:09:46.453671 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-09-19 00:09:46.453689 | orchestrator | 2025-09-19 00:09:46.453717 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 00:09:48.912060 | orchestrator | ok: [testbed-manager] 2025-09-19 00:09:48.912114 | orchestrator | 2025-09-19 00:09:48.912136 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-19 00:09:48.946242 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:09:48.946284 | orchestrator | 2025-09-19 00:09:48.946292 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-19 00:09:48.979172 | orchestrator | ok: [testbed-manager] 2025-09-19 00:09:48.979211 | orchestrator | 2025-09-19 00:09:48.979219 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-19 00:09:49.014396 | orchestrator | ok: [testbed-manager] 2025-09-19 00:09:49.014440 | orchestrator | 2025-09-19 00:09:49.014448 | orchestrator | TASK [osism.commons.repository : Set repository_default fact 
to default value] *** 2025-09-19 00:09:49.078072 | orchestrator | ok: [testbed-manager] 2025-09-19 00:09:49.078119 | orchestrator | 2025-09-19 00:09:49.078127 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-19 00:09:49.134071 | orchestrator | ok: [testbed-manager] 2025-09-19 00:09:49.134118 | orchestrator | 2025-09-19 00:09:49.134126 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-19 00:09:49.172959 | orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-09-19 00:09:49.173000 | orchestrator | 2025-09-19 00:09:49.173007 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-19 00:09:49.859073 | orchestrator | ok: [testbed-manager] 2025-09-19 00:09:49.859128 | orchestrator | 2025-09-19 00:09:49.859136 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-19 00:09:49.898768 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:09:49.898802 | orchestrator | 2025-09-19 00:09:49.898807 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-19 00:09:51.202831 | orchestrator | changed: [testbed-manager] 2025-09-19 00:09:51.202895 | orchestrator | 2025-09-19 00:09:51.202905 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-19 00:09:51.757963 | orchestrator | ok: [testbed-manager] 2025-09-19 00:09:51.758006 | orchestrator | 2025-09-19 00:09:51.758094 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-19 00:09:52.828368 | orchestrator | changed: [testbed-manager] 2025-09-19 00:09:52.828424 | orchestrator | 2025-09-19 00:09:52.828441 | orchestrator | TASK [osism.commons.repository : Update 
package cache] ************************* 2025-09-19 00:10:09.647447 | orchestrator | changed: [testbed-manager] 2025-09-19 00:10:09.647478 | orchestrator | 2025-09-19 00:10:09.647483 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-19 00:10:10.239325 | orchestrator | ok: [testbed-manager] 2025-09-19 00:10:10.239358 | orchestrator | 2025-09-19 00:10:10.239369 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-19 00:10:10.291042 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:10:10.291077 | orchestrator | 2025-09-19 00:10:10.291086 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-09-19 00:10:11.193263 | orchestrator | changed: [testbed-manager] 2025-09-19 00:10:11.193356 | orchestrator | 2025-09-19 00:10:11.193375 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-09-19 00:10:12.211570 | orchestrator | changed: [testbed-manager] 2025-09-19 00:10:12.211687 | orchestrator | 2025-09-19 00:10:12.211704 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-09-19 00:10:12.775627 | orchestrator | changed: [testbed-manager] 2025-09-19 00:10:12.775660 | orchestrator | 2025-09-19 00:10:12.775666 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-09-19 00:10:12.815977 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-19 00:10:12.816097 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-19 00:10:12.816114 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-19 00:10:12.816127 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-09-19 00:10:15.329026 | orchestrator | changed: [testbed-manager] 2025-09-19 00:10:15.329077 | orchestrator | 2025-09-19 00:10:15.329086 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-09-19 00:10:24.408663 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-09-19 00:10:24.408709 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-09-19 00:10:24.408717 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-09-19 00:10:24.408723 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-09-19 00:10:24.408732 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-09-19 00:10:24.408843 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-09-19 00:10:24.408851 | orchestrator | 2025-09-19 00:10:24.408857 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-09-19 00:10:25.451494 | orchestrator | changed: [testbed-manager] 2025-09-19 00:10:25.451576 | orchestrator | 2025-09-19 00:10:25.451612 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-09-19 00:10:25.496189 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:10:25.496251 | orchestrator | 2025-09-19 00:10:25.496261 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-09-19 00:10:28.617161 | orchestrator | changed: [testbed-manager] 2025-09-19 00:10:28.617217 | orchestrator | 2025-09-19 00:10:28.617226 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-09-19 00:10:28.655748 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:10:28.655826 | orchestrator | 2025-09-19 00:10:28.655843 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-09-19 00:12:08.288366 | orchestrator | changed: [testbed-manager] 2025-09-19 
00:12:08.288462 | orchestrator | 2025-09-19 00:12:08.288479 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-19 00:12:09.456701 | orchestrator | ok: [testbed-manager] 2025-09-19 00:12:09.456741 | orchestrator | 2025-09-19 00:12:09.456750 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:12:09.456758 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-09-19 00:12:09.456764 | orchestrator | 2025-09-19 00:12:09.672981 | orchestrator | ok: Runtime: 0:02:23.703408 2025-09-19 00:12:09.691444 | 2025-09-19 00:12:09.691598 | TASK [Reboot manager] 2025-09-19 00:12:11.233480 | orchestrator | ok: Runtime: 0:00:00.944897 2025-09-19 00:12:11.251491 | 2025-09-19 00:12:11.251676 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-19 00:12:25.983421 | orchestrator | ok 2025-09-19 00:12:25.994907 | 2025-09-19 00:12:25.995071 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-19 00:13:26.049259 | orchestrator | ok 2025-09-19 00:13:26.059228 | 2025-09-19 00:13:26.059365 | TASK [Deploy manager + bootstrap nodes] 2025-09-19 00:13:28.679854 | orchestrator | 2025-09-19 00:13:28.680045 | orchestrator | # DEPLOY MANAGER 2025-09-19 00:13:28.680069 | orchestrator | 2025-09-19 00:13:28.680083 | orchestrator | + set -e 2025-09-19 00:13:28.680096 | orchestrator | + echo 2025-09-19 00:13:28.680110 | orchestrator | + echo '# DEPLOY MANAGER' 2025-09-19 00:13:28.680127 | orchestrator | + echo 2025-09-19 00:13:28.680177 | orchestrator | + cat /opt/manager-vars.sh 2025-09-19 00:13:28.683305 | orchestrator | export NUMBER_OF_NODES=6 2025-09-19 00:13:28.683344 | orchestrator | 2025-09-19 00:13:28.683357 | orchestrator | export CEPH_VERSION=reef 2025-09-19 00:13:28.683371 | orchestrator | export CONFIGURATION_VERSION=main 2025-09-19 00:13:28.683384 | orchestrator 
| export MANAGER_VERSION=9.2.0 2025-09-19 00:13:28.683407 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-09-19 00:13:28.683418 | orchestrator | 2025-09-19 00:13:28.683437 | orchestrator | export ARA=false 2025-09-19 00:13:28.683448 | orchestrator | export DEPLOY_MODE=manager 2025-09-19 00:13:28.683466 | orchestrator | export TEMPEST=true 2025-09-19 00:13:28.683477 | orchestrator | export IS_ZUUL=true 2025-09-19 00:13:28.683488 | orchestrator | 2025-09-19 00:13:28.683506 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-09-19 00:13:28.683518 | orchestrator | export EXTERNAL_API=false 2025-09-19 00:13:28.683529 | orchestrator | 2025-09-19 00:13:28.683540 | orchestrator | export IMAGE_USER=ubuntu 2025-09-19 00:13:28.683554 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-09-19 00:13:28.683565 | orchestrator | 2025-09-19 00:13:28.683576 | orchestrator | export CEPH_STACK=ceph-ansible 2025-09-19 00:13:28.683594 | orchestrator | 2025-09-19 00:13:28.683605 | orchestrator | + echo 2025-09-19 00:13:28.683617 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-19 00:13:28.684219 | orchestrator | ++ export INTERACTIVE=false 2025-09-19 00:13:28.684240 | orchestrator | ++ INTERACTIVE=false 2025-09-19 00:13:28.684252 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-19 00:13:28.684263 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-19 00:13:28.684366 | orchestrator | + source /opt/manager-vars.sh 2025-09-19 00:13:28.684381 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-19 00:13:28.684396 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-19 00:13:28.684407 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-19 00:13:28.684417 | orchestrator | ++ CEPH_VERSION=reef 2025-09-19 00:13:28.684432 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-19 00:13:28.684443 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-19 00:13:28.684454 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-09-19 00:13:28.684465 | 
orchestrator | ++ MANAGER_VERSION=9.2.0 2025-09-19 00:13:28.684476 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-19 00:13:28.684496 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-19 00:13:28.684511 | orchestrator | ++ export ARA=false 2025-09-19 00:13:28.684522 | orchestrator | ++ ARA=false 2025-09-19 00:13:28.684533 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-19 00:13:28.684543 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-19 00:13:28.684554 | orchestrator | ++ export TEMPEST=true 2025-09-19 00:13:28.684564 | orchestrator | ++ TEMPEST=true 2025-09-19 00:13:28.684575 | orchestrator | ++ export IS_ZUUL=true 2025-09-19 00:13:28.684585 | orchestrator | ++ IS_ZUUL=true 2025-09-19 00:13:28.684596 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-09-19 00:13:28.684607 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-09-19 00:13:28.684618 | orchestrator | ++ export EXTERNAL_API=false 2025-09-19 00:13:28.684650 | orchestrator | ++ EXTERNAL_API=false 2025-09-19 00:13:28.684665 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-19 00:13:28.684676 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-19 00:13:28.684687 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-19 00:13:28.684698 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-19 00:13:28.684708 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-19 00:13:28.684719 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-19 00:13:28.684730 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-09-19 00:13:28.739004 | orchestrator | + docker version 2025-09-19 00:13:29.031298 | orchestrator | Client: Docker Engine - Community 2025-09-19 00:13:29.031413 | orchestrator | Version: 27.5.1 2025-09-19 00:13:29.031438 | orchestrator | API version: 1.47 2025-09-19 00:13:29.031460 | orchestrator | Go version: go1.22.11 2025-09-19 00:13:29.031479 | orchestrator | Git commit: 9f9e405 2025-09-19 00:13:29.031498 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-19 00:13:29.031518 | orchestrator | OS/Arch: linux/amd64 2025-09-19 00:13:29.031538 | orchestrator | Context: default 2025-09-19 00:13:29.031557 | orchestrator | 2025-09-19 00:13:29.031577 | orchestrator | Server: Docker Engine - Community 2025-09-19 00:13:29.031596 | orchestrator | Engine: 2025-09-19 00:13:29.031672 | orchestrator | Version: 27.5.1 2025-09-19 00:13:29.031697 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-09-19 00:13:29.031753 | orchestrator | Go version: go1.22.11 2025-09-19 00:13:29.031766 | orchestrator | Git commit: 4c9b3b0 2025-09-19 00:13:29.031777 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-19 00:13:29.031788 | orchestrator | OS/Arch: linux/amd64 2025-09-19 00:13:29.031799 | orchestrator | Experimental: false 2025-09-19 00:13:29.031809 | orchestrator | containerd: 2025-09-19 00:13:29.031820 | orchestrator | Version: 1.7.27 2025-09-19 00:13:29.031831 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-09-19 00:13:29.031843 | orchestrator | runc: 2025-09-19 00:13:29.031854 | orchestrator | Version: 1.2.5 2025-09-19 00:13:29.031865 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-09-19 00:13:29.031875 | orchestrator | docker-init: 2025-09-19 00:13:29.031892 | orchestrator | Version: 0.19.0 2025-09-19 00:13:29.031904 | orchestrator | GitCommit: de40ad0 2025-09-19 00:13:29.036443 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-09-19 00:13:29.046177 | orchestrator | + set -e 2025-09-19 00:13:29.046225 | orchestrator | + source /opt/manager-vars.sh 2025-09-19 00:13:29.046246 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-19 00:13:29.046267 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-19 00:13:29.046281 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-19 00:13:29.046293 | orchestrator | ++ CEPH_VERSION=reef 2025-09-19 00:13:29.046306 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-19 
00:13:29.046319 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-19 00:13:29.046337 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-09-19 00:13:29.046357 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-09-19 00:13:29.046376 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-19 00:13:29.046396 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-19 00:13:29.046415 | orchestrator | ++ export ARA=false 2025-09-19 00:13:29.046435 | orchestrator | ++ ARA=false 2025-09-19 00:13:29.046450 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-19 00:13:29.046471 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-19 00:13:29.046490 | orchestrator | ++ export TEMPEST=true 2025-09-19 00:13:29.046502 | orchestrator | ++ TEMPEST=true 2025-09-19 00:13:29.046515 | orchestrator | ++ export IS_ZUUL=true 2025-09-19 00:13:29.046533 | orchestrator | ++ IS_ZUUL=true 2025-09-19 00:13:29.046553 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-09-19 00:13:29.046572 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-09-19 00:13:29.046591 | orchestrator | ++ export EXTERNAL_API=false 2025-09-19 00:13:29.046609 | orchestrator | ++ EXTERNAL_API=false 2025-09-19 00:13:29.046649 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-19 00:13:29.046661 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-19 00:13:29.046672 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-19 00:13:29.046690 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-19 00:13:29.046701 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-19 00:13:29.046712 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-19 00:13:29.046723 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-19 00:13:29.046734 | orchestrator | ++ export INTERACTIVE=false 2025-09-19 00:13:29.046745 | orchestrator | ++ INTERACTIVE=false 2025-09-19 00:13:29.046755 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-19 00:13:29.046771 | orchestrator | ++ OSISM_APPLY_RETRY=1 
2025-09-19 00:13:29.046782 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]] 2025-09-19 00:13:29.046793 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.2.0 2025-09-19 00:13:29.051965 | orchestrator | + set -e 2025-09-19 00:13:29.052019 | orchestrator | + VERSION=9.2.0 2025-09-19 00:13:29.052035 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.2.0/g' /opt/configuration/environments/manager/configuration.yml 2025-09-19 00:13:29.061921 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]] 2025-09-19 00:13:29.061991 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-09-19 00:13:29.066506 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-09-19 00:13:29.070279 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-09-19 00:13:29.079141 | orchestrator | /opt/configuration ~ 2025-09-19 00:13:29.079192 | orchestrator | + set -e 2025-09-19 00:13:29.079203 | orchestrator | + pushd /opt/configuration 2025-09-19 00:13:29.079213 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-19 00:13:29.084438 | orchestrator | + source /opt/venv/bin/activate 2025-09-19 00:13:29.085804 | orchestrator | ++ deactivate nondestructive 2025-09-19 00:13:29.085822 | orchestrator | ++ '[' -n '' ']' 2025-09-19 00:13:29.085835 | orchestrator | ++ '[' -n '' ']' 2025-09-19 00:13:29.085874 | orchestrator | ++ hash -r 2025-09-19 00:13:29.085889 | orchestrator | ++ '[' -n '' ']' 2025-09-19 00:13:29.085899 | orchestrator | ++ unset VIRTUAL_ENV 2025-09-19 00:13:29.085909 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-09-19 00:13:29.085919 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-09-19 00:13:29.086147 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-09-19 00:13:29.086163 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-09-19 00:13:29.086173 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-09-19 00:13:29.086183 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-09-19 00:13:29.086194 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-19 00:13:29.086204 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-19 00:13:29.086214 | orchestrator | ++ export PATH 2025-09-19 00:13:29.086334 | orchestrator | ++ '[' -n '' ']' 2025-09-19 00:13:29.086348 | orchestrator | ++ '[' -z '' ']' 2025-09-19 00:13:29.086358 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-09-19 00:13:29.086368 | orchestrator | ++ PS1='(venv) ' 2025-09-19 00:13:29.086377 | orchestrator | ++ export PS1 2025-09-19 00:13:29.086387 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-09-19 00:13:29.086396 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-09-19 00:13:29.086410 | orchestrator | ++ hash -r 2025-09-19 00:13:29.086420 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-09-19 00:13:30.170823 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-09-19 00:13:30.170931 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2025-09-19 00:13:30.171068 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-09-19 00:13:30.172613 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-09-19 00:13:30.173824 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (25.0) 2025-09-19 00:13:30.185037 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.0) 2025-09-19 00:13:30.186416 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-09-19 00:13:30.187440 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2025-09-19 00:13:30.188922 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-09-19 00:13:30.221370 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.3) 2025-09-19 00:13:30.222855 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-09-19 00:13:30.224620 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.5.0) 2025-09-19 00:13:30.225966 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.8.3) 2025-09-19 00:13:30.229977 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-09-19 00:13:30.436157 | orchestrator | ++ which gilt 2025-09-19 00:13:30.439989 | orchestrator | + GILT=/opt/venv/bin/gilt 2025-09-19 00:13:30.440011 | orchestrator | + /opt/venv/bin/gilt overlay 2025-09-19 00:13:30.674807 | orchestrator | osism.cfg-generics: 2025-09-19 00:13:30.845261 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-09-19 00:13:30.845363 | orchestrator | - copied 
(v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-09-19 00:13:30.845391 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-09-19 00:13:30.845405 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-09-19 00:13:31.478921 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-09-19 00:13:31.491187 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-09-19 00:13:31.789879 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-09-19 00:13:31.844338 | orchestrator | ~ 2025-09-19 00:13:31.844440 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-19 00:13:31.844456 | orchestrator | + deactivate 2025-09-19 00:13:31.844469 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-09-19 00:13:31.844482 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-19 00:13:31.844493 | orchestrator | + export PATH 2025-09-19 00:13:31.844504 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-09-19 00:13:31.844515 | orchestrator | + '[' -n '' ']' 2025-09-19 00:13:31.844529 | orchestrator | + hash -r 2025-09-19 00:13:31.844540 | orchestrator | + '[' -n '' ']' 2025-09-19 00:13:31.844551 | orchestrator | + unset VIRTUAL_ENV 2025-09-19 00:13:31.844562 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-09-19 00:13:31.844573 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-09-19 00:13:31.844584 | orchestrator | + unset -f deactivate 2025-09-19 00:13:31.844595 | orchestrator | + popd 2025-09-19 00:13:31.845133 | orchestrator | + [[ 9.2.0 == \l\a\t\e\s\t ]] 2025-09-19 00:13:31.845152 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-09-19 00:13:31.845823 | orchestrator | ++ semver 9.2.0 7.0.0 2025-09-19 00:13:31.908264 | orchestrator | + [[ 1 -ge 0 ]] 2025-09-19 00:13:31.908361 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-09-19 00:13:31.908376 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-09-19 00:13:32.007915 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-19 00:13:32.008035 | orchestrator | + source /opt/venv/bin/activate 2025-09-19 00:13:32.008051 | orchestrator | ++ deactivate nondestructive 2025-09-19 00:13:32.008062 | orchestrator | ++ '[' -n '' ']' 2025-09-19 00:13:32.008073 | orchestrator | ++ '[' -n '' ']' 2025-09-19 00:13:32.008084 | orchestrator | ++ hash -r 2025-09-19 00:13:32.008096 | orchestrator | ++ '[' -n '' ']' 2025-09-19 00:13:32.008107 | orchestrator | ++ unset VIRTUAL_ENV 2025-09-19 00:13:32.008135 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-09-19 00:13:32.008157 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-09-19 00:13:32.008169 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-09-19 00:13:32.008180 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-09-19 00:13:32.008191 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-09-19 00:13:32.008202 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-09-19 00:13:32.008213 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-19 00:13:32.008226 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-19 00:13:32.008326 | orchestrator | ++ export PATH 2025-09-19 00:13:32.008342 | orchestrator | ++ '[' -n '' ']' 2025-09-19 00:13:32.008353 | orchestrator | ++ '[' -z '' ']' 2025-09-19 00:13:32.008364 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-09-19 00:13:32.008375 | orchestrator | ++ PS1='(venv) ' 2025-09-19 00:13:32.008387 | orchestrator | ++ export PS1 2025-09-19 00:13:32.008406 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-09-19 00:13:32.008422 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-09-19 00:13:32.008439 | orchestrator | ++ hash -r 2025-09-19 00:13:32.008460 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-09-19 00:13:33.153915 | orchestrator | 2025-09-19 00:13:33.154082 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-09-19 00:13:33.154101 | orchestrator | 2025-09-19 00:13:33.154113 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-19 00:13:33.751948 | orchestrator | ok: [testbed-manager] 2025-09-19 00:13:33.752058 | orchestrator | 2025-09-19 00:13:33.752075 | orchestrator | TASK [Copy fact files] ********************************************************* 
2025-09-19 00:13:34.718975 | orchestrator | changed: [testbed-manager] 2025-09-19 00:13:34.719055 | orchestrator | 2025-09-19 00:13:34.719064 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-09-19 00:13:34.719072 | orchestrator | 2025-09-19 00:13:34.719080 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 00:13:37.008620 | orchestrator | ok: [testbed-manager] 2025-09-19 00:13:37.008775 | orchestrator | 2025-09-19 00:13:37.008792 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-09-19 00:13:37.063843 | orchestrator | ok: [testbed-manager] 2025-09-19 00:13:37.063932 | orchestrator | 2025-09-19 00:13:37.063945 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-09-19 00:13:37.532596 | orchestrator | changed: [testbed-manager] 2025-09-19 00:13:37.532792 | orchestrator | 2025-09-19 00:13:37.532823 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-09-19 00:13:37.577913 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:13:37.577999 | orchestrator | 2025-09-19 00:13:37.578012 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-19 00:13:37.915850 | orchestrator | changed: [testbed-manager] 2025-09-19 00:13:37.915947 | orchestrator | 2025-09-19 00:13:37.915961 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-09-19 00:13:37.972035 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:13:37.972122 | orchestrator | 2025-09-19 00:13:37.972136 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-09-19 00:13:38.296939 | orchestrator | ok: [testbed-manager] 2025-09-19 00:13:38.297057 | orchestrator | 2025-09-19 00:13:38.297074 | orchestrator | TASK 
[Add nova_compute_virt_type parameter] ************************************ 2025-09-19 00:13:38.413226 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:13:38.413320 | orchestrator | 2025-09-19 00:13:38.413335 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-09-19 00:13:38.413347 | orchestrator | 2025-09-19 00:13:38.413359 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 00:13:40.171826 | orchestrator | ok: [testbed-manager] 2025-09-19 00:13:40.171922 | orchestrator | 2025-09-19 00:13:40.171938 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-09-19 00:13:40.281336 | orchestrator | included: osism.services.traefik for testbed-manager 2025-09-19 00:13:40.281432 | orchestrator | 2025-09-19 00:13:40.281453 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-09-19 00:13:40.336110 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-09-19 00:13:40.336201 | orchestrator | 2025-09-19 00:13:40.336215 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-09-19 00:13:41.440060 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-09-19 00:13:41.440151 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-09-19 00:13:41.440165 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-09-19 00:13:41.440176 | orchestrator | 2025-09-19 00:13:41.440187 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-09-19 00:13:43.215831 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-09-19 00:13:43.215919 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 
2025-09-19 00:13:43.215930 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-09-19 00:13:43.215939 | orchestrator | 2025-09-19 00:13:43.215947 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-09-19 00:13:43.864195 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-19 00:13:43.864316 | orchestrator | changed: [testbed-manager] 2025-09-19 00:13:43.864335 | orchestrator | 2025-09-19 00:13:43.864349 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-09-19 00:13:44.531139 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-19 00:13:44.531240 | orchestrator | changed: [testbed-manager] 2025-09-19 00:13:44.531257 | orchestrator | 2025-09-19 00:13:44.531270 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-09-19 00:13:44.592882 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:13:44.592985 | orchestrator | 2025-09-19 00:13:44.593008 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-09-19 00:13:44.949908 | orchestrator | ok: [testbed-manager] 2025-09-19 00:13:44.950005 | orchestrator | 2025-09-19 00:13:44.950072 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-09-19 00:13:45.029955 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-09-19 00:13:45.030110 | orchestrator | 2025-09-19 00:13:45.030128 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-09-19 00:13:46.137701 | orchestrator | changed: [testbed-manager] 2025-09-19 00:13:46.137827 | orchestrator | 2025-09-19 00:13:46.137853 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-09-19 
00:13:46.925593 | orchestrator | changed: [testbed-manager] 2025-09-19 00:13:46.925732 | orchestrator | 2025-09-19 00:13:46.925749 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-09-19 00:13:58.014098 | orchestrator | changed: [testbed-manager] 2025-09-19 00:13:58.014176 | orchestrator | 2025-09-19 00:13:58.014209 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-09-19 00:13:58.064745 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:13:58.064828 | orchestrator | 2025-09-19 00:13:58.064839 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-09-19 00:13:58.064848 | orchestrator | 2025-09-19 00:13:58.064855 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 00:13:59.877337 | orchestrator | ok: [testbed-manager] 2025-09-19 00:13:59.877436 | orchestrator | 2025-09-19 00:13:59.877451 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-09-19 00:13:59.979445 | orchestrator | included: osism.services.manager for testbed-manager 2025-09-19 00:13:59.979541 | orchestrator | 2025-09-19 00:13:59.979555 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-09-19 00:14:00.054249 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-09-19 00:14:00.054340 | orchestrator | 2025-09-19 00:14:00.054355 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-09-19 00:14:02.598320 | orchestrator | ok: [testbed-manager] 2025-09-19 00:14:02.598423 | orchestrator | 2025-09-19 00:14:02.598440 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-09-19 00:14:02.648416 | 
orchestrator | ok: [testbed-manager] 2025-09-19 00:14:02.648509 | orchestrator | 2025-09-19 00:14:02.648524 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-09-19 00:14:02.770250 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-09-19 00:14:02.770343 | orchestrator | 2025-09-19 00:14:02.770359 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-09-19 00:14:05.648797 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-09-19 00:14:05.648889 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-09-19 00:14:05.648899 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-09-19 00:14:05.648906 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-09-19 00:14:05.648912 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-09-19 00:14:05.648919 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-09-19 00:14:05.648925 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-09-19 00:14:05.648931 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-09-19 00:14:05.648938 | orchestrator | 2025-09-19 00:14:05.648947 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-09-19 00:14:06.299831 | orchestrator | changed: [testbed-manager] 2025-09-19 00:14:06.299930 | orchestrator | 2025-09-19 00:14:06.299945 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-09-19 00:14:06.948791 | orchestrator | changed: [testbed-manager] 2025-09-19 00:14:06.948911 | orchestrator | 2025-09-19 00:14:06.948938 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-09-19 
00:14:07.028659 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-09-19 00:14:07.028791 | orchestrator | 2025-09-19 00:14:07.028816 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-09-19 00:14:08.245777 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-09-19 00:14:08.245873 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-09-19 00:14:08.245891 | orchestrator | 2025-09-19 00:14:08.245907 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-09-19 00:14:08.875630 | orchestrator | changed: [testbed-manager] 2025-09-19 00:14:08.875766 | orchestrator | 2025-09-19 00:14:08.875780 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-09-19 00:14:08.940149 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:14:08.940244 | orchestrator | 2025-09-19 00:14:08.940258 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2025-09-19 00:14:09.046242 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2025-09-19 00:14:09.046339 | orchestrator | 2025-09-19 00:14:09.046353 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2025-09-19 00:14:09.677350 | orchestrator | changed: [testbed-manager] 2025-09-19 00:14:09.677451 | orchestrator | 2025-09-19 00:14:09.677467 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-09-19 00:14:09.734441 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-09-19 00:14:09.734514 | orchestrator | 2025-09-19 00:14:09.734521 
| orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-09-19 00:14:11.111883 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-19 00:14:11.111977 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-19 00:14:11.111992 | orchestrator | changed: [testbed-manager] 2025-09-19 00:14:11.112006 | orchestrator | 2025-09-19 00:14:11.112047 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-09-19 00:14:11.731873 | orchestrator | changed: [testbed-manager] 2025-09-19 00:14:11.731960 | orchestrator | 2025-09-19 00:14:11.731970 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-09-19 00:14:11.780357 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:14:11.780441 | orchestrator | 2025-09-19 00:14:11.780454 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-09-19 00:14:11.874401 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-09-19 00:14:11.874480 | orchestrator | 2025-09-19 00:14:11.874489 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-09-19 00:14:12.418981 | orchestrator | changed: [testbed-manager] 2025-09-19 00:14:12.419078 | orchestrator | 2025-09-19 00:14:12.419094 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-09-19 00:14:12.813331 | orchestrator | changed: [testbed-manager] 2025-09-19 00:14:12.813426 | orchestrator | 2025-09-19 00:14:12.813441 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-09-19 00:14:14.060538 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-09-19 00:14:14.060631 | orchestrator | changed: [testbed-manager] => (item=openstack) 
2025-09-19 00:14:14.060645 | orchestrator | 2025-09-19 00:14:14.060656 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-09-19 00:14:14.724447 | orchestrator | changed: [testbed-manager] 2025-09-19 00:14:14.724543 | orchestrator | 2025-09-19 00:14:14.724558 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-09-19 00:14:15.141978 | orchestrator | ok: [testbed-manager] 2025-09-19 00:14:15.142124 | orchestrator | 2025-09-19 00:14:15.142142 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-09-19 00:14:15.502151 | orchestrator | changed: [testbed-manager] 2025-09-19 00:14:15.502250 | orchestrator | 2025-09-19 00:14:15.502265 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-09-19 00:14:15.549094 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:14:15.549181 | orchestrator | 2025-09-19 00:14:15.549196 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-09-19 00:14:15.620529 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-09-19 00:14:15.620649 | orchestrator | 2025-09-19 00:14:15.620665 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-09-19 00:14:15.667191 | orchestrator | ok: [testbed-manager] 2025-09-19 00:14:15.667290 | orchestrator | 2025-09-19 00:14:15.667306 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-09-19 00:14:17.671640 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-09-19 00:14:17.671766 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-09-19 00:14:17.671783 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 
2025-09-19 00:14:17.671796 | orchestrator |
2025-09-19 00:14:17.671809 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-09-19 00:14:18.364970 | orchestrator | changed: [testbed-manager]
2025-09-19 00:14:18.365070 | orchestrator |
2025-09-19 00:14:18.365086 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-09-19 00:14:19.062521 | orchestrator | changed: [testbed-manager]
2025-09-19 00:14:19.062614 | orchestrator |
2025-09-19 00:14:19.062628 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-09-19 00:14:19.767171 | orchestrator | changed: [testbed-manager]
2025-09-19 00:14:19.767267 | orchestrator |
2025-09-19 00:14:19.767282 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-09-19 00:14:19.847835 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-09-19 00:14:19.847928 | orchestrator |
2025-09-19 00:14:19.847942 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-09-19 00:14:19.893323 | orchestrator | ok: [testbed-manager]
2025-09-19 00:14:19.893415 | orchestrator |
2025-09-19 00:14:19.893430 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-09-19 00:14:20.613098 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-09-19 00:14:20.613209 | orchestrator |
2025-09-19 00:14:20.613229 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-09-19 00:14:20.694520 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-09-19 00:14:20.694611 | orchestrator |
2025-09-19 00:14:20.694625 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-09-19 00:14:21.373623 | orchestrator | changed: [testbed-manager]
2025-09-19 00:14:21.373756 | orchestrator |
2025-09-19 00:14:21.373772 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-09-19 00:14:21.986205 | orchestrator | ok: [testbed-manager]
2025-09-19 00:14:21.986275 | orchestrator |
2025-09-19 00:14:21.986283 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-09-19 00:14:22.035755 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:14:22.035824 | orchestrator |
2025-09-19 00:14:22.035830 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-09-19 00:14:22.113450 | orchestrator | ok: [testbed-manager]
2025-09-19 00:14:22.113544 | orchestrator |
2025-09-19 00:14:22.113556 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-09-19 00:14:22.913352 | orchestrator | changed: [testbed-manager]
2025-09-19 00:14:22.913476 | orchestrator |
2025-09-19 00:14:22.913499 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-09-19 00:15:32.943455 | orchestrator | changed: [testbed-manager]
2025-09-19 00:15:32.943548 | orchestrator |
2025-09-19 00:15:32.943560 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-09-19 00:15:33.970382 | orchestrator | ok: [testbed-manager]
2025-09-19 00:15:33.970470 | orchestrator |
2025-09-19 00:15:33.970481 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-09-19 00:15:34.029823 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:15:34.029936 | orchestrator |
2025-09-19 00:15:34.029959 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-09-19 00:15:36.455961 | orchestrator | changed: [testbed-manager]
2025-09-19 00:15:36.456063 | orchestrator |
2025-09-19 00:15:36.456079 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-09-19 00:15:36.559621 | orchestrator | ok: [testbed-manager]
2025-09-19 00:15:36.559717 | orchestrator |
2025-09-19 00:15:36.559732 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-09-19 00:15:36.559744 | orchestrator |
2025-09-19 00:15:36.559754 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-09-19 00:15:36.616564 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:15:36.616658 | orchestrator |
2025-09-19 00:15:36.616673 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-09-19 00:16:36.670878 | orchestrator | Pausing for 60 seconds
2025-09-19 00:16:36.670991 | orchestrator | changed: [testbed-manager]
2025-09-19 00:16:36.671006 | orchestrator |
2025-09-19 00:16:36.671019 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-09-19 00:16:42.200274 | orchestrator | changed: [testbed-manager]
2025-09-19 00:16:42.200383 | orchestrator |
2025-09-19 00:16:42.200400 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-09-19 00:17:23.826448 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-09-19 00:17:23.826568 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-09-19 00:17:23.826581 | orchestrator | changed: [testbed-manager]
2025-09-19 00:17:23.826591 | orchestrator |
2025-09-19 00:17:23.826601 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-09-19 00:17:33.532482 | orchestrator | changed: [testbed-manager]
2025-09-19 00:17:33.532599 | orchestrator |
2025-09-19 00:17:33.532623 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-09-19 00:17:33.618206 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-09-19 00:17:33.618302 | orchestrator |
2025-09-19 00:17:33.618317 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-09-19 00:17:33.618329 | orchestrator |
2025-09-19 00:17:33.618341 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-09-19 00:17:33.663604 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:17:33.663699 | orchestrator |
2025-09-19 00:17:33.663721 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 00:17:33.663742 | orchestrator | testbed-manager : ok=66 changed=36 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-09-19 00:17:33.663759 | orchestrator |
2025-09-19 00:17:33.772183 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-19 00:17:33.772275 | orchestrator | + deactivate
2025-09-19 00:17:33.772289 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-09-19 00:17:33.772301 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-19 00:17:33.772311 | orchestrator | + export PATH
2025-09-19 00:17:33.772321 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-09-19 00:17:33.772332 | orchestrator | + '[' -n '' ']'
2025-09-19 00:17:33.772342 | orchestrator | + hash -r
2025-09-19 00:17:33.772351 | orchestrator | + '[' -n '' ']'
2025-09-19 00:17:33.772361 | orchestrator | + unset VIRTUAL_ENV
2025-09-19 00:17:33.772371 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-09-19 00:17:33.772381 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-09-19 00:17:33.772391 | orchestrator | + unset -f deactivate
2025-09-19 00:17:33.772401 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-09-19 00:17:33.779074 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-09-19 00:17:33.779109 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-09-19 00:17:33.779119 | orchestrator | + local max_attempts=60
2025-09-19 00:17:33.779130 | orchestrator | + local name=ceph-ansible
2025-09-19 00:17:33.779140 | orchestrator | + local attempt_num=1
2025-09-19 00:17:33.779412 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 00:17:33.805511 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-19 00:17:33.805589 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-09-19 00:17:33.805601 | orchestrator | + local max_attempts=60
2025-09-19 00:17:33.805611 | orchestrator | + local name=kolla-ansible
2025-09-19 00:17:33.805643 | orchestrator | + local attempt_num=1
2025-09-19 00:17:33.805776 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-09-19 00:17:33.837885 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-19 00:17:33.837986 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-09-19 00:17:33.838007 | orchestrator | + local max_attempts=60
2025-09-19 00:17:33.838141 | orchestrator | + local name=osism-ansible
2025-09-19 00:17:33.838161 | orchestrator | + local attempt_num=1
2025-09-19 00:17:33.838271 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-09-19 00:17:33.864347 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-19 00:17:33.864428 | orchestrator | + [[ true == \t\r\u\e ]]
2025-09-19 00:17:33.864441 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-09-19 00:17:34.564390 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-09-19 00:17:34.806150 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-09-19 00:17:34.806250 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20250711.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-09-19 00:17:34.806265 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20250711.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-09-19 00:17:34.806278 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-09-19 00:17:34.806293 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-09-19 00:17:34.806304 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-09-19 00:17:34.806315 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-09-19 00:17:34.806326 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20250711.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy)
2025-09-19 00:17:34.806337 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-09-19 00:17:34.806348 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-09-19 00:17:34.806359 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2025-09-19 00:17:34.806370 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-09-19 00:17:34.806381 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20250711.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-09-19 00:17:34.806392 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp
2025-09-19 00:17:34.806403 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20250711.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-09-19 00:17:34.806445 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2025-09-19 00:17:34.815199 | orchestrator | ++ semver 9.2.0 7.0.0
2025-09-19 00:17:34.871668 | orchestrator | + [[ 1 -ge 0 ]]
2025-09-19 00:17:34.871755 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-09-19 00:17:34.875587 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-09-19 00:17:47.104925 | orchestrator | 2025-09-19 00:17:47 | INFO  | Task 46c63a96-e6ba-48fa-ac19-dcec8f6f1f29 (resolvconf) was prepared for execution.
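The `wait_for_container_healthy` calls traced above come from a small bash helper in the deployment scripts. A minimal sketch consistent with the trace is shown below; note that the polling interval, the failure message, and the use of `docker` via `PATH` (the trace calls `/usr/bin/docker` directly) are assumptions, not the actual implementation:

```shell
#!/usr/bin/env bash
# Poll a container's health status until it reports "healthy" or the
# attempt budget is exhausted. Mirrors the variables visible in the
# trace: max_attempts, name, attempt_num.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1

    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5  # polling interval is an assumption; the trace does not show it
    done
}
```

In the trace it is invoked as `wait_for_container_healthy 60 ceph-ansible` and returns on the first check because the container already reports `healthy`.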
2025-09-19 00:17:47.105029 | orchestrator | 2025-09-19 00:17:47 | INFO  | It takes a moment until task 46c63a96-e6ba-48fa-ac19-dcec8f6f1f29 (resolvconf) has been started and output is visible here.
2025-09-19 00:18:01.714662 | orchestrator |
2025-09-19 00:18:01.714777 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-09-19 00:18:01.714857 | orchestrator |
2025-09-19 00:18:01.714871 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-19 00:18:01.714885 | orchestrator | Friday 19 September 2025 00:17:50 +0000 (0:00:00.152) 0:00:00.152 ******
2025-09-19 00:18:01.714898 | orchestrator | ok: [testbed-manager]
2025-09-19 00:18:01.714912 | orchestrator |
2025-09-19 00:18:01.714926 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-09-19 00:18:01.714941 | orchestrator | Friday 19 September 2025 00:17:55 +0000 (0:00:04.820) 0:00:04.972 ******
2025-09-19 00:18:01.714953 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:18:01.714967 | orchestrator |
2025-09-19 00:18:01.714980 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-09-19 00:18:01.714993 | orchestrator | Friday 19 September 2025 00:17:55 +0000 (0:00:00.065) 0:00:05.037 ******
2025-09-19 00:18:01.715006 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-09-19 00:18:01.715020 | orchestrator |
2025-09-19 00:18:01.715033 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-09-19 00:18:01.715046 | orchestrator | Friday 19 September 2025 00:17:55 +0000 (0:00:00.081) 0:00:05.119 ******
2025-09-19 00:18:01.715060 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-09-19 00:18:01.715073 | orchestrator |
2025-09-19 00:18:01.715087 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-09-19 00:18:01.715099 | orchestrator | Friday 19 September 2025 00:17:56 +0000 (0:00:01.059) 0:00:05.190 ******
2025-09-19 00:18:01.715112 | orchestrator | ok: [testbed-manager]
2025-09-19 00:18:01.715125 | orchestrator |
2025-09-19 00:18:01.715138 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-09-19 00:18:01.715151 | orchestrator | Friday 19 September 2025 00:17:57 +0000 (0:00:00.069) 0:00:06.249 ******
2025-09-19 00:18:01.715164 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:18:01.715176 | orchestrator |
2025-09-19 00:18:01.715192 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-09-19 00:18:01.715207 | orchestrator | Friday 19 September 2025 00:17:57 +0000 (0:00:00.490) 0:00:06.319 ******
2025-09-19 00:18:01.715220 | orchestrator | ok: [testbed-manager]
2025-09-19 00:18:01.715233 | orchestrator |
2025-09-19 00:18:01.715245 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-09-19 00:18:01.715258 | orchestrator | Friday 19 September 2025 00:17:57 +0000 (0:00:00.086) 0:00:06.809 ******
2025-09-19 00:18:01.715270 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:18:01.715282 | orchestrator |
2025-09-19 00:18:01.715295 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-09-19 00:18:01.715333 | orchestrator | Friday 19 September 2025 00:17:57 +0000 (0:00:00.086) 0:00:06.895 ******
2025-09-19 00:18:01.715346 | orchestrator | changed: [testbed-manager]
2025-09-19 00:18:01.715358 | orchestrator |
2025-09-19 00:18:01.715370 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-09-19 00:18:01.715383 | orchestrator | Friday 19 September 2025 00:17:58 +0000 (0:00:00.493) 0:00:07.389 ******
2025-09-19 00:18:01.715395 | orchestrator | changed: [testbed-manager]
2025-09-19 00:18:01.715407 | orchestrator |
2025-09-19 00:18:01.715419 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-09-19 00:18:01.715431 | orchestrator | Friday 19 September 2025 00:17:59 +0000 (0:00:01.072) 0:00:08.461 ******
2025-09-19 00:18:01.715443 | orchestrator | ok: [testbed-manager]
2025-09-19 00:18:01.715456 | orchestrator |
2025-09-19 00:18:01.715468 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-09-19 00:18:01.715493 | orchestrator | Friday 19 September 2025 00:18:00 +0000 (0:00:00.970) 0:00:09.432 ******
2025-09-19 00:18:01.715507 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-09-19 00:18:01.715519 | orchestrator |
2025-09-19 00:18:01.715532 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-09-19 00:18:01.715544 | orchestrator | Friday 19 September 2025 00:18:00 +0000 (0:00:00.085) 0:00:09.518 ******
2025-09-19 00:18:01.715556 | orchestrator | changed: [testbed-manager]
2025-09-19 00:18:01.715568 | orchestrator |
2025-09-19 00:18:01.715580 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 00:18:01.715593 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-19 00:18:01.715604 | orchestrator |
2025-09-19 00:18:01.715617 | orchestrator |
2025-09-19 00:18:01.715628 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 00:18:01.715638 | orchestrator | Friday 19 September 2025 00:18:01 +0000 (0:00:01.149) 0:00:10.667 ******
2025-09-19 00:18:01.715648 | orchestrator | ===============================================================================
2025-09-19 00:18:01.715658 | orchestrator | Gathering Facts --------------------------------------------------------- 4.82s
2025-09-19 00:18:01.715669 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.15s
2025-09-19 00:18:01.715679 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.07s
2025-09-19 00:18:01.715689 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.06s
2025-09-19 00:18:01.715699 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.97s
2025-09-19 00:18:01.715709 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.49s
2025-09-19 00:18:01.715737 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.49s
2025-09-19 00:18:01.715749 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s
2025-09-19 00:18:01.715759 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s
2025-09-19 00:18:01.715769 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s
2025-09-19 00:18:01.715779 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s
2025-09-19 00:18:01.715807 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2025-09-19 00:18:01.715818 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2025-09-19 00:18:01.981926 | orchestrator | + osism apply sshconfig
2025-09-19 00:18:13.932512 | orchestrator | 2025-09-19 00:18:13 | INFO  | Task 6bfc5dc6-e644-47fe-9095-28d5120b0e42 (sshconfig) was prepared for execution.
2025-09-19 00:18:13.932620 | orchestrator | 2025-09-19 00:18:13 | INFO  | It takes a moment until task 6bfc5dc6-e644-47fe-9095-28d5120b0e42 (sshconfig) has been started and output is visible here.
2025-09-19 00:18:25.111339 | orchestrator |
2025-09-19 00:18:25.111449 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-09-19 00:18:25.111465 | orchestrator |
2025-09-19 00:18:25.111477 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-09-19 00:18:25.111489 | orchestrator | Friday 19 September 2025 00:18:17 +0000 (0:00:00.146) 0:00:00.146 ******
2025-09-19 00:18:25.111501 | orchestrator | ok: [testbed-manager]
2025-09-19 00:18:25.111513 | orchestrator |
2025-09-19 00:18:25.111524 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-09-19 00:18:25.111536 | orchestrator | Friday 19 September 2025 00:18:18 +0000 (0:00:00.555) 0:00:00.701 ******
2025-09-19 00:18:25.111547 | orchestrator | changed: [testbed-manager]
2025-09-19 00:18:25.111558 | orchestrator |
2025-09-19 00:18:25.111569 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-09-19 00:18:25.111580 | orchestrator | Friday 19 September 2025 00:18:18 +0000 (0:00:00.473) 0:00:01.175 ******
2025-09-19 00:18:25.111591 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-09-19 00:18:25.111602 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-09-19 00:18:25.111613 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-09-19 00:18:25.111624 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-09-19 00:18:25.111634 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-09-19 00:18:25.111645 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-09-19 00:18:25.111656 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-09-19 00:18:25.111667 | orchestrator |
2025-09-19 00:18:25.111677 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-09-19 00:18:25.111688 | orchestrator | Friday 19 September 2025 00:18:24 +0000 (0:00:05.474) 0:00:06.650 ******
2025-09-19 00:18:25.111719 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:18:25.111730 | orchestrator |
2025-09-19 00:18:25.111741 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-09-19 00:18:25.111752 | orchestrator | Friday 19 September 2025 00:18:24 +0000 (0:00:00.595) 0:00:06.723 ******
2025-09-19 00:18:25.111763 | orchestrator | changed: [testbed-manager]
2025-09-19 00:18:25.111774 | orchestrator |
2025-09-19 00:18:25.111841 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 00:18:25.111856 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 00:18:25.111868 | orchestrator |
2025-09-19 00:18:25.111879 | orchestrator |
2025-09-19 00:18:25.111914 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 00:18:25.111927 | orchestrator | Friday 19 September 2025 00:18:24 +0000 (0:00:00.595) 0:00:07.318 ******
2025-09-19 00:18:25.111940 | orchestrator | ===============================================================================
2025-09-19 00:18:25.111964 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.47s
2025-09-19 00:18:25.111987 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.60s
2025-09-19 00:18:25.112000 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.56s
2025-09-19 00:18:25.112013 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.47s
2025-09-19 00:18:25.112025 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s
2025-09-19 00:18:25.388466 | orchestrator | + osism apply known-hosts
2025-09-19 00:18:37.347407 | orchestrator | 2025-09-19 00:18:37 | INFO  | Task 1be7bb5c-71e0-473d-a1d5-ad2ff1f29aa4 (known-hosts) was prepared for execution.
2025-09-19 00:18:37.347502 | orchestrator | 2025-09-19 00:18:37 | INFO  | It takes a moment until task 1be7bb5c-71e0-473d-a1d5-ad2ff1f29aa4 (known-hosts) has been started and output is visible here.
2025-09-19 00:18:53.633436 | orchestrator |
2025-09-19 00:18:53.633511 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-09-19 00:18:53.633517 | orchestrator |
2025-09-19 00:18:53.633522 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-09-19 00:18:53.633527 | orchestrator | Friday 19 September 2025 00:18:41 +0000 (0:00:00.157) 0:00:00.157 ******
2025-09-19 00:18:53.633532 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-09-19 00:18:53.633537 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-09-19 00:18:53.633541 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-09-19 00:18:53.633545 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-09-19 00:18:53.633549 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-09-19 00:18:53.633552 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-09-19 00:18:53.633556 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-09-19 00:18:53.633560 | orchestrator |
2025-09-19 00:18:53.633564 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-09-19 00:18:53.633569 | orchestrator | Friday 19 September 2025 00:18:46 +0000 (0:00:05.824) 0:00:05.982 ******
2025-09-19 00:18:53.633574 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-09-19 00:18:53.633580 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-09-19 00:18:53.633584 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-09-19 00:18:53.633588 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-09-19 00:18:53.633591 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-09-19 00:18:53.633595 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-09-19 00:18:53.633599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-09-19 00:18:53.633603 | orchestrator |
2025-09-19 00:18:53.633607 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 00:18:53.633610 | orchestrator | Friday 19 September 2025 00:18:47 +0000 (0:00:00.163) 0:00:06.145 ******
2025-09-19 00:18:53.633614 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIACQdp0Cbxu+6I3iF+865GLDF0m3YRIqlwGzcemShfka)
2025-09-19 00:18:53.633647 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/xFDaNiUDrwXXFdz7oqJDeTBlllWVsRRoyG6HyQI10PqQP7kcnIt0P6vz8DGeaHvCwxTn4DRP15cZsC4UhwdnbZA+LMiUNS2X7Wcc+Zv5HQP66dYs8nTIxD+FetNXOhU6RA5i+HquSTeNzEq/kAHu7bu5/fRLuhCwQWLcz4jKeQrLS9ZVDkidEl0anPswQ7lqhsTbvqqiK+Kbe+23Ahzx7wZ3aq/MLdZT/OpRwZWlYmHK7m6R3jtATNjN5iy51zLEYfJ5O9Oo7ZMnk7D7+oMvi08m7RsP+m6KbFDchAuhFEib0KLsUqO3X4OoioeCUnlS41kGr7tlTMqiLj+7ewu9oqcu9c8gGLwAZcnrc6f3CtbeH0eQLP214I5JUVJETA9G/ikASmLMz5MLqgQ1zw/gu8ieGJPtpfxEa0jpsd8UgOwnfXtHXQJ+A6RbkHDG5DwK9rt9CAgA6CmKuDgii7X/m8JnIIWrXaGeACy9JbKWpskVe9y3Ts3yXRZFZ8A3s0c=)
2025-09-19 00:18:53.633654 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKnASbVFQFXfb+UIECf8rIEmhluqC9hz6ihdlN7+q/V9WF+POcsUL5uugjia8QROHpAGpHwLpxQBQ+XqUNE+8TE=)
2025-09-19 00:18:53.633672 | orchestrator |
2025-09-19 00:18:53.633676 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 00:18:53.633680 | orchestrator | Friday 19 September 2025 00:18:48 +0000 (0:00:01.185) 0:00:07.331 ******
2025-09-19 00:18:53.633693 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCJYXIDipRxw1nQv5g5gpl/T1sUkSUjxNe0cDySellWzBAwoxMAb7tBZs8sYAsZlPKQjMH+cxeCouKG3lYhgsHv2fqn9BWcs9rwvGmkdzCdvp7X1jOcwMkxfZDWIwgmq4+Q8JgDfpANp0CKNhcv3v4cRPkmngi4RAiWXOsB8H4dKbeglqGInhD2MmeJddIw6HQvRNVHPmm8FJv1lQfgmL7ht6WrwcvwjpG/q4ux2xbgw1bYC8cHNK9uUqJRTDYnE9LBVAuKFIVvRq1wFkRxLtqQRF6CgLyTPlHwewXjI0FsgYA1GRBYfe3FC95tG+C+tdfmE3MnTYzznizNL0UKTA5H9cUuukuWyVYFdG7ut9V/pXAaLwMeXJga7ZPSg6QdjACMieX0Zmt+OF+34H2zBTp0DRFK3a6K3qo2XGws8O3qTDrlXBt7Ch6zWYt2+uNroJdVsDGKdza0/+wZt+TPkRl1/mZUEX94m9W0lEvM4lqx9P2dCFts8WHJIqmhn93XfzM=)
2025-09-19 00:18:53.633697 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNGvpYVVgiVwqivSIOECWgaROcLpYjBFqD5Mz85n+MkczuayVkIPDew+nLsBPZfnP7GsDPkQwjIwYy0ZP24MQ3I=)
2025-09-19 00:18:53.633701 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFrMMAufHj0477xZghF4wSTk+s2MaUhY0NcFdOOx6euD)
2025-09-19 00:18:53.633705 | orchestrator |
2025-09-19 00:18:53.633709 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 00:18:53.633713 | orchestrator | Friday 19 September 2025 00:18:49 +0000 (0:00:01.072) 0:00:08.404 ******
2025-09-19 00:18:53.633717 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkXdEXqkJ7HNPR+R+yOqFHbWPbLsgUmnRhmrPMv2IKkS3hMFfXKOH1yAp1Bgo7todcVVzSG76EHtLuHe1uDCiEwy++QQQrPcoNj8vIGjwvJ3TByVO4UjjVxXy9uCh7evPDivsI4REtbiI6/Ax8rTfD/+h7Z4RQ7CejikWIALCSJsdGnxVo0ZthklFhSE9d4ujOzRb1e2zxSL0wgyI0AJB6LZ5r/Mxx8MjE2wt35AWCX5QopoEXMPTsZdGfDbMkKeyF7d0unly/2LKWGkNRdTJXYg1AmRGU1EPJYZMhZ/M5YWKIrreijqXqUi8J9isUk24YEdk74hQOioT5u34ynpuiWhON7uF3r617MCl6oCub4xjVMVGNIqb5hUKtyKH810qmfUF9Fv28ScUCAQHsP+g7JFkHR+TKAV7nSkKYafqpEBdazZRl5ovR6JSq8WQf9/58RSIYYLzj6APdK5Zamf4+BVLya/ia7BAo/iCW5CKqznU1XeokbkoLvL8POSb5Eqk=)
2025-09-19 00:18:53.633721 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA+9HzsTc9Lws6/9M2ukLxU7ARkKQMf3Fxg/oSUHtVjk8h9MYLVHz2sDaipfJY8+Kk6NjKXflXAXb01ZcM/veTg=)
2025-09-19 00:18:53.633725 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMf9E0XHaS9Zw9jSn0AdaqKt08h9orBCJ8K53V2bRVRJ)
2025-09-19 00:18:53.633729 | orchestrator |
2025-09-19 00:18:53.633733 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 00:18:53.633737 | orchestrator | Friday 19 September 2025 00:18:50 +0000 (0:00:01.049) 0:00:09.453 ******
2025-09-19 00:18:53.633741 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCO9aOPgpDIwOnp+3KRxAxLaurjLa9M21l1A8Gm4NSvuMvQHePkBJUAJDb7rnJqi9BY7Fo6pwzTIgDd0/zjDsyfiggsThSi7QQpT7QpeWiIiRZNP143qiTKeTphr28wBXbHhdCutJDxKdo5JsRNq/ZPuQJzMUC3AV1biEuPL1ZlVDzuKEtHKGxVQlRvsQVh3nT0bM1VP9JKBstGU9Ip86JtwAH5ePx/mhxRAcFqzgVNu9yAM2jkPS9p88rztOyVzr8bRry1yIffSckPQb4mtu9f6ivlabVZ2kMtlBxq2XgavNGcyq4o1ylSanxYWrbyTz7cTWeQcsO2OcisBxHCwy74Zj2g5ixrJqdngkktPLzHlWdaAZseJU4LIUuQuCF4UkjQt/ZPVUOQCrF82mbF0rpkHIJk3seoN+laPFuFjX8lIBu+qHrz00JPq5yFO2EcS776WSza3v8WG0Sw23KqMckaF0ickfeJ8GhPeKrQcUJb+4h8h1ldokfVn1O0Bf5UuOE=)
2025-09-19 00:18:53.633745 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIirMRWPUkx/et3ehw7IWhz8LI5p90FbZNLheGC+WgeebFiKYTLLXYD0TNPK9pryfoSUUDin1ykkw8nchIVWnok=)
2025-09-19 00:18:53.633748 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJiKs/Xls7jv/usrQVX4LxrtLJc5Br2ZXd776B5vqJpS)
2025-09-19 00:18:53.633752 | orchestrator |
2025-09-19 00:18:53.633758 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 00:18:53.633762 | orchestrator | Friday 19 September 2025 00:18:51 +0000 (0:00:01.068) 0:00:10.522 ******
2025-09-19 00:18:53.633766 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPUpJTPGbYYvJoSRx7kCrQWu4wc//hjESirNnsbLoJNT2WVvBTRXTZ+hKTSEbIt+QlzGhRbDerZ/PV7ZE+xP4OY=)
2025-09-19 00:18:53.633770 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGbfk4l0STPsDG8picTp6eWXAaBGf626XyqXYSgHATCK)
2025-09-19 00:18:53.633776 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/m6+zLumRocxe24Aq9VKOwn+tdXWqVscBWgr358rPA/CquND8cQzth2ewlVhk3dx/C5jigO9FqjtmCqgyDv7XyIR9XLeqFpZK+mkY848WI1Dl4Me3GhTEO5I/12ShtsId8kOBGX7PhqyEvU4gZJBV1eRexAzZe8x1RIgi5qoeUHAan1+CqMyx+QROeUs41WmcNKTesFKlJVuL+lzENL0UR5PIwekNk3HbMC5FdzYq5hLC/GD7037DnzlzVkRIp+AzSmP33F7aHDAbi3UGAIcIyN2mZZub3fn1fVAGOC1RuuFENY9UfkV4eEk93Cy1ZTBCIw9tywEGYsIgJO2Z+uYCRv7v1zo4I8B4JGDAfNUhICGQ6pboGU69pzNk1HHMcrekI3kvVUKFtoS0IkfbgsteWynmHz7LXNMqYTe4KgjSak7CriMI3cJG570if5oCgXxAWjEr6X1JM8wziVfnR+BEGckb6XlTcoWStUdFOkaNvKI1E9CVPMhuvPGX5ww7zVc=)
2025-09-19 00:18:53.633812 | orchestrator |
2025-09-19 00:18:53.633817 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 00:18:53.633820 | orchestrator | Friday 19 September 2025 00:18:52 +0000 (0:00:01.092) 0:00:11.615 ******
2025-09-19 00:18:53.633827 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCbIrbiMbl52rsUHYCz4o+GBd8fvEpeW2p3cdvUFtpkmcw79w3BgIeI1TT4DG7ocpS31PbD5hqM5cjH2/zlen8bZlLokWt4OqnsEJ0K4Ig1byJnG7rQpmodVeJKSGIP+LPQC2E+9riwS8syBQXO1b/BRq0wNRk8wMsPWWpwGrYhbB+GppTQXUbB+5QiTM9Y2aIM8ax6s84/vswHHr7DU4Q4qE1aSL0u+PLXd8oCDaaqRBsnqST4Mymuc+RFzfWzr/gwykwEkT4UjSnNm7Y7n7k6E6wtfsEmTs37IS9l9afTe2XqQQ0eTH/0Q3LMz05mmSxv5IVBrswu3HQQbko2o3AvqprGksIw3kUPrMnm53/BDPKNswe0/qL/F8sJQqN6G93kCbwyqr6sgcYRCXYCvn5sekMFTYB/SCqxJ4wpPMMh9yufj7JcZQtSPshxswKL/QwYbpHk7TT9i4bsVTXzXzsZeOg+syH+9kSqGdlVFdjU2XjNwMyimvHdnGNIMpTZP2s=)
2025-09-19 00:19:04.485118 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCNkbyd6nzxx1UMxtYMLwiLlHcH6LBxnrAfa5H03BwDyg595J5IbKCowqLbGh2+rupxVugW1K4Ah3RN7vdH4BhU=)
2025-09-19 00:19:04.485253 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINDhsMBjd3NNn0bdqQyHK3q0a7IshcXTZ9DKnMRjKRg3)
2025-09-19 00:19:04.485274 | orchestrator |
2025-09-19 00:19:04.485287 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 00:19:04.485300 | orchestrator | Friday 19 September 2025 00:18:53 +0000 (0:00:01.044) 0:00:12.660 ******
2025-09-19 00:19:04.485312 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD3jYznTh4zhpCKQM/nzQjCR01Io9oXllpW63zpcAJEWw5BzvbFahcNgjmVgVp+rIfNDF3k5riVMVXiQ9dF2U+U=)
2025-09-19 00:19:04.485326 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQCu+PGcnsqv0NcKq2f87TYaAq1O6ggT0t0qC3exctxJPqMDCnIhMdiA5IJkqeqAqG1iIaP0anqpqF6V+d/L8T+pVrEYJMIt1N+2qqZ/7HoeYbGeP1Jo1Ru8cAjuoxW5XXYf982sd2avRHSx0cFn72L8QCDxdmHvkB+2CoipZq1MF+uwJhqlijsJC+KOUgJ1mHG21UHCcICG2m/Ybo5XLfLY+aF5Pc4oIQCVOgBYYaUm4Yeta6DvsIH3ZodVFvmc+Pqq4h6u3wlaUQjlRIx2iwWcEOhgTmfd1igB41P8tCNhwIVFi8IhM/aMMJzrZ7SOtMbp4PIYp+rOlLp8W9KKHl/d8X0MYM3KYn9xnVSYrzn5o1uBFw6WsARsRBWZLygVj++UqCrf6aUhFBpy3r8aqRRcgROOGViTFc0PyR31zfDUj3DVbQWokg5BPcOy3rebUZU6qT9QqvDTFZmZGKVpTuJFrstiO0idHeF/a/3FAU1hMWhaVIXGOfYOhFh1RelS1Qk=) 2025-09-19 00:19:04.485340 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAHTyx1I+i4L5vNr5w0acDjRCoVQ5Mi/5cwwpytUhbA5) 2025-09-19 00:19:04.485351 | orchestrator | 2025-09-19 00:19:04.485362 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-09-19 00:19:04.485399 | orchestrator | Friday 19 September 2025 00:18:54 +0000 (0:00:01.109) 0:00:13.769 ****** 2025-09-19 00:19:04.485412 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-19 00:19:04.485423 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-19 00:19:04.485433 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-19 00:19:04.485444 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-19 00:19:04.485455 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-19 00:19:04.485466 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-19 00:19:04.485477 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-19 00:19:04.485488 | orchestrator | 2025-09-19 00:19:04.485499 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-09-19 00:19:04.485511 | orchestrator | Friday 19 September 2025 00:19:00 +0000 (0:00:05.281) 0:00:19.050 ****** 2025-09-19 00:19:04.485523 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-19 00:19:04.485536 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-19 00:19:04.485547 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-19 00:19:04.485558 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-19 00:19:04.485569 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-19 00:19:04.485579 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-19 00:19:04.485590 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-19 00:19:04.485601 | orchestrator | 2025-09-19 00:19:04.485612 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 00:19:04.485623 | orchestrator | Friday 19 September 2025 00:19:00 +0000 (0:00:00.168) 0:00:19.219 ****** 2025-09-19 00:19:04.485634 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIACQdp0Cbxu+6I3iF+865GLDF0m3YRIqlwGzcemShfka) 2025-09-19 00:19:04.485690 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/xFDaNiUDrwXXFdz7oqJDeTBlllWVsRRoyG6HyQI10PqQP7kcnIt0P6vz8DGeaHvCwxTn4DRP15cZsC4UhwdnbZA+LMiUNS2X7Wcc+Zv5HQP66dYs8nTIxD+FetNXOhU6RA5i+HquSTeNzEq/kAHu7bu5/fRLuhCwQWLcz4jKeQrLS9ZVDkidEl0anPswQ7lqhsTbvqqiK+Kbe+23Ahzx7wZ3aq/MLdZT/OpRwZWlYmHK7m6R3jtATNjN5iy51zLEYfJ5O9Oo7ZMnk7D7+oMvi08m7RsP+m6KbFDchAuhFEib0KLsUqO3X4OoioeCUnlS41kGr7tlTMqiLj+7ewu9oqcu9c8gGLwAZcnrc6f3CtbeH0eQLP214I5JUVJETA9G/ikASmLMz5MLqgQ1zw/gu8ieGJPtpfxEa0jpsd8UgOwnfXtHXQJ+A6RbkHDG5DwK9rt9CAgA6CmKuDgii7X/m8JnIIWrXaGeACy9JbKWpskVe9y3Ts3yXRZFZ8A3s0c=) 2025-09-19 00:19:04.485705 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKnASbVFQFXfb+UIECf8rIEmhluqC9hz6ihdlN7+q/V9WF+POcsUL5uugjia8QROHpAGpHwLpxQBQ+XqUNE+8TE=) 2025-09-19 00:19:04.485720 | orchestrator | 2025-09-19 00:19:04.485733 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 00:19:04.485746 | orchestrator | Friday 19 September 2025 00:19:01 +0000 (0:00:01.060) 0:00:20.279 ****** 2025-09-19 00:19:04.485768 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNGvpYVVgiVwqivSIOECWgaROcLpYjBFqD5Mz85n+MkczuayVkIPDew+nLsBPZfnP7GsDPkQwjIwYy0ZP24MQ3I=) 2025-09-19 00:19:04.485821 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCJYXIDipRxw1nQv5g5gpl/T1sUkSUjxNe0cDySellWzBAwoxMAb7tBZs8sYAsZlPKQjMH+cxeCouKG3lYhgsHv2fqn9BWcs9rwvGmkdzCdvp7X1jOcwMkxfZDWIwgmq4+Q8JgDfpANp0CKNhcv3v4cRPkmngi4RAiWXOsB8H4dKbeglqGInhD2MmeJddIw6HQvRNVHPmm8FJv1lQfgmL7ht6WrwcvwjpG/q4ux2xbgw1bYC8cHNK9uUqJRTDYnE9LBVAuKFIVvRq1wFkRxLtqQRF6CgLyTPlHwewXjI0FsgYA1GRBYfe3FC95tG+C+tdfmE3MnTYzznizNL0UKTA5H9cUuukuWyVYFdG7ut9V/pXAaLwMeXJga7ZPSg6QdjACMieX0Zmt+OF+34H2zBTp0DRFK3a6K3qo2XGws8O3qTDrlXBt7Ch6zWYt2+uNroJdVsDGKdza0/+wZt+TPkRl1/mZUEX94m9W0lEvM4lqx9P2dCFts8WHJIqmhn93XfzM=) 2025-09-19 00:19:04.485836 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFrMMAufHj0477xZghF4wSTk+s2MaUhY0NcFdOOx6euD) 2025-09-19 00:19:04.485849 | orchestrator | 2025-09-19 00:19:04.485860 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 00:19:04.485871 | orchestrator | Friday 19 September 2025 00:19:02 +0000 (0:00:01.047) 0:00:21.326 ****** 2025-09-19 00:19:04.485882 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA+9HzsTc9Lws6/9M2ukLxU7ARkKQMf3Fxg/oSUHtVjk8h9MYLVHz2sDaipfJY8+Kk6NjKXflXAXb01ZcM/veTg=) 2025-09-19 00:19:04.485893 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMf9E0XHaS9Zw9jSn0AdaqKt08h9orBCJ8K53V2bRVRJ) 2025-09-19 00:19:04.485905 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCkXdEXqkJ7HNPR+R+yOqFHbWPbLsgUmnRhmrPMv2IKkS3hMFfXKOH1yAp1Bgo7todcVVzSG76EHtLuHe1uDCiEwy++QQQrPcoNj8vIGjwvJ3TByVO4UjjVxXy9uCh7evPDivsI4REtbiI6/Ax8rTfD/+h7Z4RQ7CejikWIALCSJsdGnxVo0ZthklFhSE9d4ujOzRb1e2zxSL0wgyI0AJB6LZ5r/Mxx8MjE2wt35AWCX5QopoEXMPTsZdGfDbMkKeyF7d0unly/2LKWGkNRdTJXYg1AmRGU1EPJYZMhZ/M5YWKIrreijqXqUi8J9isUk24YEdk74hQOioT5u34ynpuiWhON7uF3r617MCl6oCub4xjVMVGNIqb5hUKtyKH810qmfUF9Fv28ScUCAQHsP+g7JFkHR+TKAV7nSkKYafqpEBdazZRl5ovR6JSq8WQf9/58RSIYYLzj6APdK5Zamf4+BVLya/ia7BAo/iCW5CKqznU1XeokbkoLvL8POSb5Eqk=) 2025-09-19 00:19:04.485916 | orchestrator | 2025-09-19 00:19:04.485927 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 00:19:04.485938 | orchestrator | Friday 19 September 2025 00:19:03 +0000 (0:00:01.080) 0:00:22.407 ****** 2025-09-19 00:19:04.485949 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJiKs/Xls7jv/usrQVX4LxrtLJc5Br2ZXd776B5vqJpS) 2025-09-19 00:19:04.485960 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCO9aOPgpDIwOnp+3KRxAxLaurjLa9M21l1A8Gm4NSvuMvQHePkBJUAJDb7rnJqi9BY7Fo6pwzTIgDd0/zjDsyfiggsThSi7QQpT7QpeWiIiRZNP143qiTKeTphr28wBXbHhdCutJDxKdo5JsRNq/ZPuQJzMUC3AV1biEuPL1ZlVDzuKEtHKGxVQlRvsQVh3nT0bM1VP9JKBstGU9Ip86JtwAH5ePx/mhxRAcFqzgVNu9yAM2jkPS9p88rztOyVzr8bRry1yIffSckPQb4mtu9f6ivlabVZ2kMtlBxq2XgavNGcyq4o1ylSanxYWrbyTz7cTWeQcsO2OcisBxHCwy74Zj2g5ixrJqdngkktPLzHlWdaAZseJU4LIUuQuCF4UkjQt/ZPVUOQCrF82mbF0rpkHIJk3seoN+laPFuFjX8lIBu+qHrz00JPq5yFO2EcS776WSza3v8WG0Sw23KqMckaF0ickfeJ8GhPeKrQcUJb+4h8h1ldokfVn1O0Bf5UuOE=) 2025-09-19 00:19:04.485982 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIirMRWPUkx/et3ehw7IWhz8LI5p90FbZNLheGC+WgeebFiKYTLLXYD0TNPK9pryfoSUUDin1ykkw8nchIVWnok=) 2025-09-19 00:19:08.689267 | orchestrator | 2025-09-19 00:19:08.689393 | orchestrator | 
TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 00:19:08.689423 | orchestrator | Friday 19 September 2025 00:19:04 +0000 (0:00:01.099) 0:00:23.506 ****** 2025-09-19 00:19:08.689446 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/m6+zLumRocxe24Aq9VKOwn+tdXWqVscBWgr358rPA/CquND8cQzth2ewlVhk3dx/C5jigO9FqjtmCqgyDv7XyIR9XLeqFpZK+mkY848WI1Dl4Me3GhTEO5I/12ShtsId8kOBGX7PhqyEvU4gZJBV1eRexAzZe8x1RIgi5qoeUHAan1+CqMyx+QROeUs41WmcNKTesFKlJVuL+lzENL0UR5PIwekNk3HbMC5FdzYq5hLC/GD7037DnzlzVkRIp+AzSmP33F7aHDAbi3UGAIcIyN2mZZub3fn1fVAGOC1RuuFENY9UfkV4eEk93Cy1ZTBCIw9tywEGYsIgJO2Z+uYCRv7v1zo4I8B4JGDAfNUhICGQ6pboGU69pzNk1HHMcrekI3kvVUKFtoS0IkfbgsteWynmHz7LXNMqYTe4KgjSak7CriMI3cJG570if5oCgXxAWjEr6X1JM8wziVfnR+BEGckb6XlTcoWStUdFOkaNvKI1E9CVPMhuvPGX5ww7zVc=) 2025-09-19 00:19:08.689503 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPUpJTPGbYYvJoSRx7kCrQWu4wc//hjESirNnsbLoJNT2WVvBTRXTZ+hKTSEbIt+QlzGhRbDerZ/PV7ZE+xP4OY=) 2025-09-19 00:19:08.689527 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGbfk4l0STPsDG8picTp6eWXAaBGf626XyqXYSgHATCK) 2025-09-19 00:19:08.689550 | orchestrator | 2025-09-19 00:19:08.689569 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 00:19:08.689589 | orchestrator | Friday 19 September 2025 00:19:05 +0000 (0:00:01.083) 0:00:24.589 ****** 2025-09-19 00:19:08.689607 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINDhsMBjd3NNn0bdqQyHK3q0a7IshcXTZ9DKnMRjKRg3) 2025-09-19 00:19:08.689628 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCbIrbiMbl52rsUHYCz4o+GBd8fvEpeW2p3cdvUFtpkmcw79w3BgIeI1TT4DG7ocpS31PbD5hqM5cjH2/zlen8bZlLokWt4OqnsEJ0K4Ig1byJnG7rQpmodVeJKSGIP+LPQC2E+9riwS8syBQXO1b/BRq0wNRk8wMsPWWpwGrYhbB+GppTQXUbB+5QiTM9Y2aIM8ax6s84/vswHHr7DU4Q4qE1aSL0u+PLXd8oCDaaqRBsnqST4Mymuc+RFzfWzr/gwykwEkT4UjSnNm7Y7n7k6E6wtfsEmTs37IS9l9afTe2XqQQ0eTH/0Q3LMz05mmSxv5IVBrswu3HQQbko2o3AvqprGksIw3kUPrMnm53/BDPKNswe0/qL/F8sJQqN6G93kCbwyqr6sgcYRCXYCvn5sekMFTYB/SCqxJ4wpPMMh9yufj7JcZQtSPshxswKL/QwYbpHk7TT9i4bsVTXzXzsZeOg+syH+9kSqGdlVFdjU2XjNwMyimvHdnGNIMpTZP2s=) 2025-09-19 00:19:08.689680 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCNkbyd6nzxx1UMxtYMLwiLlHcH6LBxnrAfa5H03BwDyg595J5IbKCowqLbGh2+rupxVugW1K4Ah3RN7vdH4BhU=) 2025-09-19 00:19:08.689702 | orchestrator | 2025-09-19 00:19:08.689723 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 00:19:08.689743 | orchestrator | Friday 19 September 2025 00:19:06 +0000 (0:00:01.097) 0:00:25.687 ****** 2025-09-19 00:19:08.689763 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCu+PGcnsqv0NcKq2f87TYaAq1O6ggT0t0qC3exctxJPqMDCnIhMdiA5IJkqeqAqG1iIaP0anqpqF6V+d/L8T+pVrEYJMIt1N+2qqZ/7HoeYbGeP1Jo1Ru8cAjuoxW5XXYf982sd2avRHSx0cFn72L8QCDxdmHvkB+2CoipZq1MF+uwJhqlijsJC+KOUgJ1mHG21UHCcICG2m/Ybo5XLfLY+aF5Pc4oIQCVOgBYYaUm4Yeta6DvsIH3ZodVFvmc+Pqq4h6u3wlaUQjlRIx2iwWcEOhgTmfd1igB41P8tCNhwIVFi8IhM/aMMJzrZ7SOtMbp4PIYp+rOlLp8W9KKHl/d8X0MYM3KYn9xnVSYrzn5o1uBFw6WsARsRBWZLygVj++UqCrf6aUhFBpy3r8aqRRcgROOGViTFc0PyR31zfDUj3DVbQWokg5BPcOy3rebUZU6qT9QqvDTFZmZGKVpTuJFrstiO0idHeF/a/3FAU1hMWhaVIXGOfYOhFh1RelS1Qk=) 2025-09-19 00:19:08.689813 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD3jYznTh4zhpCKQM/nzQjCR01Io9oXllpW63zpcAJEWw5BzvbFahcNgjmVgVp+rIfNDF3k5riVMVXiQ9dF2U+U=) 
2025-09-19 00:19:08.689836 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAHTyx1I+i4L5vNr5w0acDjRCoVQ5Mi/5cwwpytUhbA5) 2025-09-19 00:19:08.689856 | orchestrator | 2025-09-19 00:19:08.689876 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-09-19 00:19:08.689895 | orchestrator | Friday 19 September 2025 00:19:07 +0000 (0:00:01.016) 0:00:26.703 ****** 2025-09-19 00:19:08.689916 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-19 00:19:08.689937 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-19 00:19:08.689958 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-19 00:19:08.689994 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-19 00:19:08.690083 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-19 00:19:08.690109 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-19 00:19:08.690122 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-19 00:19:08.690136 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:19:08.690149 | orchestrator | 2025-09-19 00:19:08.690182 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-09-19 00:19:08.690195 | orchestrator | Friday 19 September 2025 00:19:07 +0000 (0:00:00.165) 0:00:26.868 ****** 2025-09-19 00:19:08.690206 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:19:08.690217 | orchestrator | 2025-09-19 00:19:08.690228 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-09-19 00:19:08.690247 | orchestrator | Friday 19 September 2025 00:19:07 +0000 (0:00:00.058) 0:00:26.926 ****** 2025-09-19 00:19:08.690258 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:19:08.690269 | orchestrator | 2025-09-19 
00:19:08.690280 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-09-19 00:19:08.690290 | orchestrator | Friday 19 September 2025 00:19:07 +0000 (0:00:00.062) 0:00:26.989 ****** 2025-09-19 00:19:08.690301 | orchestrator | changed: [testbed-manager] 2025-09-19 00:19:08.690312 | orchestrator | 2025-09-19 00:19:08.690323 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:19:08.690334 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 00:19:08.690346 | orchestrator | 2025-09-19 00:19:08.690357 | orchestrator | 2025-09-19 00:19:08.690367 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 00:19:08.690378 | orchestrator | Friday 19 September 2025 00:19:08 +0000 (0:00:00.480) 0:00:27.469 ****** 2025-09-19 00:19:08.690389 | orchestrator | =============================================================================== 2025-09-19 00:19:08.690400 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.82s 2025-09-19 00:19:08.690411 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.28s 2025-09-19 00:19:08.690423 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2025-09-19 00:19:08.690441 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-09-19 00:19:08.690459 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-09-19 00:19:08.690477 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-09-19 00:19:08.690494 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-09-19 00:19:08.690513 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-09-19 00:19:08.690532 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-09-19 00:19:08.690550 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-19 00:19:08.690568 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-19 00:19:08.690586 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-09-19 00:19:08.690605 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-09-19 00:19:08.690624 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-09-19 00:19:08.690642 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-09-19 00:19:08.690661 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-09-19 00:19:08.690679 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.48s 2025-09-19 00:19:08.690698 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-09-19 00:19:08.690728 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2025-09-19 00:19:08.690747 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2025-09-19 00:19:08.960533 | orchestrator | + osism apply squid 2025-09-19 00:19:20.923434 | orchestrator | 2025-09-19 00:19:20 | INFO  | Task b145f410-f95d-4434-95fc-d017343f944a (squid) was prepared for execution. 
2025-09-19 00:19:20.923550 | orchestrator | 2025-09-19 00:19:20 | INFO  | It takes a moment until task b145f410-f95d-4434-95fc-d017343f944a (squid) has been started and output is visible here. 2025-09-19 00:21:16.243107 | orchestrator | 2025-09-19 00:21:16.243280 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-09-19 00:21:16.243307 | orchestrator | 2025-09-19 00:21:16.243328 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-09-19 00:21:16.243348 | orchestrator | Friday 19 September 2025 00:19:24 +0000 (0:00:00.162) 0:00:00.162 ****** 2025-09-19 00:21:16.243368 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-09-19 00:21:16.243391 | orchestrator | 2025-09-19 00:21:16.243412 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-09-19 00:21:16.243433 | orchestrator | Friday 19 September 2025 00:19:24 +0000 (0:00:00.082) 0:00:00.245 ****** 2025-09-19 00:21:16.243455 | orchestrator | ok: [testbed-manager] 2025-09-19 00:21:16.243478 | orchestrator | 2025-09-19 00:21:16.243500 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-09-19 00:21:16.243521 | orchestrator | Friday 19 September 2025 00:19:26 +0000 (0:00:01.444) 0:00:01.689 ****** 2025-09-19 00:21:16.243544 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-09-19 00:21:16.243566 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-09-19 00:21:16.243588 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-09-19 00:21:16.243609 | orchestrator | 2025-09-19 00:21:16.243632 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-09-19 00:21:16.243655 | orchestrator | Friday 19 
September 2025 00:19:27 +0000 (0:00:01.127) 0:00:02.817 ****** 2025-09-19 00:21:16.243677 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-09-19 00:21:16.243699 | orchestrator | 2025-09-19 00:21:16.243721 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-09-19 00:21:16.243771 | orchestrator | Friday 19 September 2025 00:19:28 +0000 (0:00:01.087) 0:00:03.905 ****** 2025-09-19 00:21:16.243792 | orchestrator | ok: [testbed-manager] 2025-09-19 00:21:16.243812 | orchestrator | 2025-09-19 00:21:16.243829 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-09-19 00:21:16.243847 | orchestrator | Friday 19 September 2025 00:19:28 +0000 (0:00:00.394) 0:00:04.299 ****** 2025-09-19 00:21:16.243866 | orchestrator | changed: [testbed-manager] 2025-09-19 00:21:16.243883 | orchestrator | 2025-09-19 00:21:16.243902 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-09-19 00:21:16.243946 | orchestrator | Friday 19 September 2025 00:19:29 +0000 (0:00:00.910) 0:00:05.209 ****** 2025-09-19 00:21:16.243966 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-09-19 00:21:16.243985 | orchestrator | ok: [testbed-manager] 2025-09-19 00:21:16.244002 | orchestrator | 2025-09-19 00:21:16.244021 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-09-19 00:21:16.244040 | orchestrator | Friday 19 September 2025 00:20:02 +0000 (0:00:32.218) 0:00:37.427 ****** 2025-09-19 00:21:16.244057 | orchestrator | changed: [testbed-manager] 2025-09-19 00:21:16.244074 | orchestrator | 2025-09-19 00:21:16.244092 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-09-19 00:21:16.244110 | orchestrator | Friday 19 September 2025 00:20:15 +0000 (0:00:13.123) 0:00:50.551 ****** 2025-09-19 00:21:16.244129 | orchestrator | Pausing for 60 seconds 2025-09-19 00:21:16.244183 | orchestrator | changed: [testbed-manager] 2025-09-19 00:21:16.244204 | orchestrator | 2025-09-19 00:21:16.244221 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-09-19 00:21:16.244237 | orchestrator | Friday 19 September 2025 00:21:15 +0000 (0:01:00.080) 0:01:50.631 ****** 2025-09-19 00:21:16.244252 | orchestrator | ok: [testbed-manager] 2025-09-19 00:21:16.244268 | orchestrator | 2025-09-19 00:21:16.244287 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-09-19 00:21:16.244305 | orchestrator | Friday 19 September 2025 00:21:15 +0000 (0:00:00.072) 0:01:50.704 ****** 2025-09-19 00:21:16.244324 | orchestrator | changed: [testbed-manager] 2025-09-19 00:21:16.244342 | orchestrator | 2025-09-19 00:21:16.244360 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:21:16.244380 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 00:21:16.244394 | orchestrator | 2025-09-19 00:21:16.244404 | orchestrator | 2025-09-19 00:21:16.244415 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-09-19 00:21:16.244426 | orchestrator | Friday 19 September 2025 00:21:15 +0000 (0:00:00.640) 0:01:51.344 ****** 2025-09-19 00:21:16.244437 | orchestrator | =============================================================================== 2025-09-19 00:21:16.244447 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-09-19 00:21:16.244458 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.22s 2025-09-19 00:21:16.244469 | orchestrator | osism.services.squid : Restart squid service --------------------------- 13.12s 2025-09-19 00:21:16.244480 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.44s 2025-09-19 00:21:16.244491 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.13s 2025-09-19 00:21:16.244502 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.09s 2025-09-19 00:21:16.244512 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.91s 2025-09-19 00:21:16.244523 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.64s 2025-09-19 00:21:16.244534 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.39s 2025-09-19 00:21:16.244544 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2025-09-19 00:21:16.244555 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-09-19 00:21:16.501172 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]] 2025-09-19 00:21:16.501266 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-09-19 00:21:16.504287 | orchestrator | ++ semver 9.2.0 9.0.0 
2025-09-19 00:21:16.562646 | orchestrator | + [[ 1 -lt 0 ]] 2025-09-19 00:21:16.563443 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-09-19 00:21:28.494801 | orchestrator | 2025-09-19 00:21:28 | INFO  | Task 4a4a9966-d792-4d51-bdc3-3d8d3a5b0977 (operator) was prepared for execution. 2025-09-19 00:21:28.494903 | orchestrator | 2025-09-19 00:21:28 | INFO  | It takes a moment until task 4a4a9966-d792-4d51-bdc3-3d8d3a5b0977 (operator) has been started and output is visible here. 2025-09-19 00:21:43.532680 | orchestrator | 2025-09-19 00:21:43.532910 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-09-19 00:21:43.532940 | orchestrator | 2025-09-19 00:21:43.532959 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 00:21:43.532977 | orchestrator | Friday 19 September 2025 00:21:32 +0000 (0:00:00.145) 0:00:00.145 ****** 2025-09-19 00:21:43.532995 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:21:43.533013 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:21:43.533032 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:21:43.533050 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:21:43.533068 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:21:43.533115 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:21:43.533133 | orchestrator | 2025-09-19 00:21:43.533151 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-09-19 00:21:43.533169 | orchestrator | Friday 19 September 2025 00:21:35 +0000 (0:00:03.253) 0:00:03.398 ****** 2025-09-19 00:21:43.533185 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:21:43.533203 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:21:43.533222 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:21:43.533241 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:21:43.533262 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:21:43.533282 | 
orchestrator | ok: [testbed-node-0] 2025-09-19 00:21:43.533300 | orchestrator | 2025-09-19 00:21:43.533320 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-09-19 00:21:43.533340 | orchestrator | 2025-09-19 00:21:43.533359 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-19 00:21:43.533379 | orchestrator | Friday 19 September 2025 00:21:36 +0000 (0:00:00.699) 0:00:04.097 ****** 2025-09-19 00:21:43.533397 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:21:43.533417 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:21:43.533436 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:21:43.533455 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:21:43.533474 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:21:43.533494 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:21:43.533513 | orchestrator | 2025-09-19 00:21:43.533533 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-19 00:21:43.533553 | orchestrator | Friday 19 September 2025 00:21:36 +0000 (0:00:00.157) 0:00:04.255 ****** 2025-09-19 00:21:43.533571 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:21:43.533587 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:21:43.533604 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:21:43.533620 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:21:43.533636 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:21:43.533651 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:21:43.533667 | orchestrator | 2025-09-19 00:21:43.533682 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-19 00:21:43.533699 | orchestrator | Friday 19 September 2025 00:21:36 +0000 (0:00:00.173) 0:00:04.428 ****** 2025-09-19 00:21:43.533716 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:21:43.533761 | orchestrator | changed: [testbed-node-5] 2025-09-19 
00:21:43.533778 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:21:43.533794 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:21:43.533811 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:21:43.533829 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:21:43.533846 | orchestrator | 2025-09-19 00:21:43.533864 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-19 00:21:43.533881 | orchestrator | Friday 19 September 2025 00:21:37 +0000 (0:00:00.607) 0:00:05.036 ****** 2025-09-19 00:21:43.533898 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:21:43.533915 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:21:43.533931 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:21:43.533947 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:21:43.533963 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:21:43.533980 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:21:43.533996 | orchestrator | 2025-09-19 00:21:43.534013 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-19 00:21:43.534090 | orchestrator | Friday 19 September 2025 00:21:37 +0000 (0:00:00.747) 0:00:05.783 ****** 2025-09-19 00:21:43.534101 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-09-19 00:21:43.534111 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-09-19 00:21:43.534121 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-09-19 00:21:43.534131 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-09-19 00:21:43.534141 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-09-19 00:21:43.534150 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-09-19 00:21:43.534160 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-09-19 00:21:43.534184 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-09-19 00:21:43.534195 | orchestrator | changed: 
[testbed-node-3] => (item=sudo) 2025-09-19 00:21:43.534204 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-09-19 00:21:43.534214 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-09-19 00:21:43.534224 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-09-19 00:21:43.534233 | orchestrator | 2025-09-19 00:21:43.534247 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-19 00:21:43.534257 | orchestrator | Friday 19 September 2025 00:21:38 +0000 (0:00:01.129) 0:00:06.912 ****** 2025-09-19 00:21:43.534267 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:21:43.534277 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:21:43.534286 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:21:43.534296 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:21:43.534306 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:21:43.534315 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:21:43.534325 | orchestrator | 2025-09-19 00:21:43.534334 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-19 00:21:43.534344 | orchestrator | Friday 19 September 2025 00:21:40 +0000 (0:00:01.280) 0:00:08.193 ****** 2025-09-19 00:21:43.534354 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-09-19 00:21:43.534364 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-09-19 00:21:43.534374 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-09-19 00:21:43.534384 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-09-19 00:21:43.534415 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-09-19 00:21:43.534426 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-09-19 00:21:43.534435 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-09-19 00:21:43.534445 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-09-19 00:21:43.534455 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-09-19 00:21:43.534464 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-09-19 00:21:43.534474 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-09-19 00:21:43.534501 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-09-19 00:21:43.534511 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-09-19 00:21:43.534521 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-09-19 00:21:43.534531 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-09-19 00:21:43.534540 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-09-19 00:21:43.534554 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-09-19 00:21:43.534564 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-09-19 00:21:43.534573 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-09-19 00:21:43.534583 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-09-19 00:21:43.534592 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-09-19 00:21:43.534602 | 
orchestrator | 2025-09-19 00:21:43.534611 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-19 00:21:43.534623 | orchestrator | Friday 19 September 2025 00:21:41 +0000 (0:00:01.193) 0:00:09.386 ****** 2025-09-19 00:21:43.534632 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:21:43.534642 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:21:43.534651 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:21:43.534661 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:21:43.534671 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:21:43.534686 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:21:43.534696 | orchestrator | 2025-09-19 00:21:43.534706 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-19 00:21:43.534715 | orchestrator | Friday 19 September 2025 00:21:41 +0000 (0:00:00.161) 0:00:09.548 ****** 2025-09-19 00:21:43.534750 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:21:43.534767 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:21:43.534784 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:21:43.534799 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:21:43.534815 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:21:43.534825 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:21:43.534835 | orchestrator | 2025-09-19 00:21:43.534845 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-19 00:21:43.534855 | orchestrator | Friday 19 September 2025 00:21:42 +0000 (0:00:00.611) 0:00:10.159 ****** 2025-09-19 00:21:43.534864 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:21:43.534874 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:21:43.534883 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:21:43.534893 | orchestrator | skipping: [testbed-node-3] 2025-09-19 
00:21:43.534902 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:21:43.534912 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:21:43.534921 | orchestrator | 2025-09-19 00:21:43.534931 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-19 00:21:43.534941 | orchestrator | Friday 19 September 2025 00:21:42 +0000 (0:00:00.179) 0:00:10.338 ****** 2025-09-19 00:21:43.534950 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-19 00:21:43.534960 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-09-19 00:21:43.534969 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:21:43.534979 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:21:43.534988 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-19 00:21:43.534998 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:21:43.535007 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-19 00:21:43.535017 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:21:43.535027 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-19 00:21:43.535036 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:21:43.535046 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-19 00:21:43.535055 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:21:43.535065 | orchestrator | 2025-09-19 00:21:43.535074 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-19 00:21:43.535084 | orchestrator | Friday 19 September 2025 00:21:43 +0000 (0:00:00.707) 0:00:11.046 ****** 2025-09-19 00:21:43.535093 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:21:43.535103 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:21:43.535112 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:21:43.535122 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:21:43.535132 | orchestrator | skipping: [testbed-node-4] 2025-09-19 
00:21:43.535141 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:21:43.535151 | orchestrator | 2025-09-19 00:21:43.535160 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-19 00:21:43.535170 | orchestrator | Friday 19 September 2025 00:21:43 +0000 (0:00:00.170) 0:00:11.216 ****** 2025-09-19 00:21:43.535180 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:21:43.535189 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:21:43.535199 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:21:43.535210 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:21:43.535221 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:21:43.535231 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:21:43.535242 | orchestrator | 2025-09-19 00:21:43.535253 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-19 00:21:43.535264 | orchestrator | Friday 19 September 2025 00:21:43 +0000 (0:00:00.139) 0:00:11.356 ****** 2025-09-19 00:21:43.535275 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:21:43.535293 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:21:43.535304 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:21:43.535315 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:21:43.535334 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:21:44.677144 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:21:44.677262 | orchestrator | 2025-09-19 00:21:44.677286 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-19 00:21:44.677304 | orchestrator | Friday 19 September 2025 00:21:43 +0000 (0:00:00.145) 0:00:11.501 ****** 2025-09-19 00:21:44.677321 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:21:44.677337 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:21:44.677352 | orchestrator | changed: [testbed-node-3] 2025-09-19 
00:21:44.677367 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:21:44.677382 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:21:44.677398 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:21:44.677412 | orchestrator | 2025-09-19 00:21:44.677428 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-19 00:21:44.677442 | orchestrator | Friday 19 September 2025 00:21:44 +0000 (0:00:00.717) 0:00:12.219 ****** 2025-09-19 00:21:44.677459 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:21:44.677474 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:21:44.677490 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:21:44.677506 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:21:44.677522 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:21:44.677539 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:21:44.677555 | orchestrator | 2025-09-19 00:21:44.677572 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:21:44.677589 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 00:21:44.677608 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 00:21:44.677624 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 00:21:44.677640 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 00:21:44.677655 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 00:21:44.677671 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 00:21:44.677687 | orchestrator | 2025-09-19 00:21:44.677704 | orchestrator | 2025-09-19 00:21:44.677753 | 
orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 00:21:44.677773 | orchestrator | Friday 19 September 2025 00:21:44 +0000 (0:00:00.214) 0:00:12.434 ****** 2025-09-19 00:21:44.677784 | orchestrator | =============================================================================== 2025-09-19 00:21:44.677795 | orchestrator | Gathering Facts --------------------------------------------------------- 3.25s 2025-09-19 00:21:44.677804 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.28s 2025-09-19 00:21:44.677814 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.19s 2025-09-19 00:21:44.677825 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.13s 2025-09-19 00:21:44.677834 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.75s 2025-09-19 00:21:44.677844 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.72s 2025-09-19 00:21:44.677853 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.71s 2025-09-19 00:21:44.677893 | orchestrator | Do not require tty for all users ---------------------------------------- 0.70s 2025-09-19 00:21:44.677903 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.61s 2025-09-19 00:21:44.677912 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.61s 2025-09-19 00:21:44.677922 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s 2025-09-19 00:21:44.677932 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s 2025-09-19 00:21:44.677941 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s 2025-09-19 00:21:44.677951 | orchestrator 
| osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s 2025-09-19 00:21:44.677961 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.16s 2025-09-19 00:21:44.677971 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s 2025-09-19 00:21:44.677980 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s 2025-09-19 00:21:44.677990 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s 2025-09-19 00:21:44.924504 | orchestrator | + osism apply --environment custom facts 2025-09-19 00:21:46.685989 | orchestrator | 2025-09-19 00:21:46 | INFO  | Trying to run play facts in environment custom 2025-09-19 00:21:56.855619 | orchestrator | 2025-09-19 00:21:56 | INFO  | Task f85637f6-1692-4f0b-8cee-4581320f6e9c (facts) was prepared for execution. 2025-09-19 00:21:56.855714 | orchestrator | 2025-09-19 00:21:56 | INFO  | It takes a moment until task f85637f6-1692-4f0b-8cee-4581320f6e9c (facts) has been started and output is visible here. 
2025-09-19 00:22:41.294107 | orchestrator | 2025-09-19 00:22:41.294223 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-09-19 00:22:41.294236 | orchestrator | 2025-09-19 00:22:41.294245 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-19 00:22:41.294254 | orchestrator | Friday 19 September 2025 00:22:00 +0000 (0:00:00.111) 0:00:00.111 ****** 2025-09-19 00:22:41.294311 | orchestrator | ok: [testbed-manager] 2025-09-19 00:22:41.294323 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:22:41.294332 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:22:41.294341 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:22:41.294350 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:22:41.294358 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:22:41.294366 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:22:41.294385 | orchestrator | 2025-09-19 00:22:41.294394 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-09-19 00:22:41.294428 | orchestrator | Friday 19 September 2025 00:22:02 +0000 (0:00:01.444) 0:00:01.555 ****** 2025-09-19 00:22:41.294436 | orchestrator | ok: [testbed-manager] 2025-09-19 00:22:41.294445 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:22:41.294453 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:22:41.294461 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:22:41.294472 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:22:41.294481 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:22:41.294489 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:22:41.294496 | orchestrator | 2025-09-19 00:22:41.294504 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-09-19 00:22:41.294512 | orchestrator | 2025-09-19 00:22:41.294520 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2025-09-19 00:22:41.294529 | orchestrator | Friday 19 September 2025 00:22:03 +0000 (0:00:01.167) 0:00:02.723 ****** 2025-09-19 00:22:41.294537 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:22:41.294545 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:22:41.294552 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:22:41.294560 | orchestrator | 2025-09-19 00:22:41.294568 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-19 00:22:41.294577 | orchestrator | Friday 19 September 2025 00:22:03 +0000 (0:00:00.124) 0:00:02.847 ****** 2025-09-19 00:22:41.294604 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:22:41.294612 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:22:41.294621 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:22:41.294631 | orchestrator | 2025-09-19 00:22:41.294640 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-19 00:22:41.294649 | orchestrator | Friday 19 September 2025 00:22:03 +0000 (0:00:00.206) 0:00:03.054 ****** 2025-09-19 00:22:41.294658 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:22:41.294666 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:22:41.294675 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:22:41.294684 | orchestrator | 2025-09-19 00:22:41.294693 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-19 00:22:41.294725 | orchestrator | Friday 19 September 2025 00:22:03 +0000 (0:00:00.209) 0:00:03.263 ****** 2025-09-19 00:22:41.294735 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:22:41.294745 | orchestrator | 2025-09-19 00:22:41.294754 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2025-09-19 00:22:41.294763 | orchestrator | Friday 19 September 2025 00:22:04 +0000 (0:00:00.161) 0:00:03.424 ****** 2025-09-19 00:22:41.294772 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:22:41.294780 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:22:41.294789 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:22:41.294798 | orchestrator | 2025-09-19 00:22:41.294807 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-19 00:22:41.294816 | orchestrator | Friday 19 September 2025 00:22:04 +0000 (0:00:00.445) 0:00:03.869 ****** 2025-09-19 00:22:41.294825 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:22:41.294834 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:22:41.294843 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:22:41.294852 | orchestrator | 2025-09-19 00:22:41.294860 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-19 00:22:41.294870 | orchestrator | Friday 19 September 2025 00:22:04 +0000 (0:00:00.114) 0:00:03.984 ****** 2025-09-19 00:22:41.294879 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:22:41.294888 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:22:41.294897 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:22:41.294906 | orchestrator | 2025-09-19 00:22:41.294915 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-19 00:22:41.294924 | orchestrator | Friday 19 September 2025 00:22:05 +0000 (0:00:01.088) 0:00:05.072 ****** 2025-09-19 00:22:41.294933 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:22:41.294942 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:22:41.294951 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:22:41.294960 | orchestrator | 2025-09-19 00:22:41.294969 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-19 
00:22:41.294978 | orchestrator | Friday 19 September 2025 00:22:06 +0000 (0:00:00.506) 0:00:05.579 ****** 2025-09-19 00:22:41.294987 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:22:41.294997 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:22:41.295006 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:22:41.295015 | orchestrator | 2025-09-19 00:22:41.295023 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-19 00:22:41.295031 | orchestrator | Friday 19 September 2025 00:22:07 +0000 (0:00:01.085) 0:00:06.665 ****** 2025-09-19 00:22:41.295038 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:22:41.295046 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:22:41.295054 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:22:41.295062 | orchestrator | 2025-09-19 00:22:41.295070 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-09-19 00:22:41.295078 | orchestrator | Friday 19 September 2025 00:22:25 +0000 (0:00:18.035) 0:00:24.700 ****** 2025-09-19 00:22:41.295085 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:22:41.295100 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:22:41.295107 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:22:41.295115 | orchestrator | 2025-09-19 00:22:41.295123 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-09-19 00:22:41.295145 | orchestrator | Friday 19 September 2025 00:22:25 +0000 (0:00:00.097) 0:00:24.797 ****** 2025-09-19 00:22:41.295154 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:22:41.295162 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:22:41.295170 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:22:41.295178 | orchestrator | 2025-09-19 00:22:41.295185 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-19 
00:22:41.295193 | orchestrator | Friday 19 September 2025 00:22:32 +0000 (0:00:07.162) 0:00:31.960 ****** 2025-09-19 00:22:41.295201 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:22:41.295209 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:22:41.295217 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:22:41.295224 | orchestrator | 2025-09-19 00:22:41.295232 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-19 00:22:41.295240 | orchestrator | Friday 19 September 2025 00:22:32 +0000 (0:00:00.408) 0:00:32.368 ****** 2025-09-19 00:22:41.295248 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-09-19 00:22:41.295256 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-09-19 00:22:41.295264 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-09-19 00:22:41.295276 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-09-19 00:22:41.295284 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-09-19 00:22:41.295291 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-09-19 00:22:41.295299 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-09-19 00:22:41.295307 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-09-19 00:22:41.295314 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-09-19 00:22:41.295322 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-09-19 00:22:41.295330 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-09-19 00:22:41.295338 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-09-19 00:22:41.295346 | orchestrator | 2025-09-19 00:22:41.295353 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] ***** 2025-09-19 00:22:41.295361 | orchestrator | Friday 19 September 2025 00:22:36 +0000 (0:00:03.192) 0:00:35.561 ****** 2025-09-19 00:22:41.295369 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:22:41.295377 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:22:41.295385 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:22:41.295392 | orchestrator | 2025-09-19 00:22:41.295400 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-19 00:22:41.295408 | orchestrator | 2025-09-19 00:22:41.295416 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-19 00:22:41.295424 | orchestrator | Friday 19 September 2025 00:22:37 +0000 (0:00:01.225) 0:00:36.786 ****** 2025-09-19 00:22:41.295432 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:22:41.295439 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:22:41.295447 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:22:41.295455 | orchestrator | ok: [testbed-manager] 2025-09-19 00:22:41.295463 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:22:41.295471 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:22:41.295478 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:22:41.295486 | orchestrator | 2025-09-19 00:22:41.295494 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:22:41.295502 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 00:22:41.295511 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 00:22:41.295525 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 00:22:41.295533 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 00:22:41.295541 | orchestrator | testbed-node-3 : 
ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 00:22:41.295549 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 00:22:41.295557 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 00:22:41.295564 | orchestrator | 2025-09-19 00:22:41.295572 | orchestrator | 2025-09-19 00:22:41.295580 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 00:22:41.295588 | orchestrator | Friday 19 September 2025 00:22:41 +0000 (0:00:03.866) 0:00:40.653 ****** 2025-09-19 00:22:41.295596 | orchestrator | =============================================================================== 2025-09-19 00:22:41.295604 | orchestrator | osism.commons.repository : Update package cache ------------------------ 18.04s 2025-09-19 00:22:41.295611 | orchestrator | Install required packages (Debian) -------------------------------------- 7.16s 2025-09-19 00:22:41.295619 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.87s 2025-09-19 00:22:41.295627 | orchestrator | Copy fact files --------------------------------------------------------- 3.19s 2025-09-19 00:22:41.295635 | orchestrator | Create custom facts directory ------------------------------------------- 1.44s 2025-09-19 00:22:41.295643 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.23s 2025-09-19 00:22:41.295654 | orchestrator | Copy fact file ---------------------------------------------------------- 1.17s 2025-09-19 00:22:41.500025 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.09s 2025-09-19 00:22:41.500112 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.09s 2025-09-19 00:22:41.500124 | orchestrator | osism.commons.repository : Remove sources.list 
file --------------------- 0.51s 2025-09-19 00:22:41.500135 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s 2025-09-19 00:22:41.500144 | orchestrator | Create custom facts directory ------------------------------------------- 0.41s 2025-09-19 00:22:41.500154 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.21s 2025-09-19 00:22:41.500164 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s 2025-09-19 00:22:41.500173 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s 2025-09-19 00:22:41.500184 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s 2025-09-19 00:22:41.500194 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s 2025-09-19 00:22:41.500203 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s 2025-09-19 00:22:41.790804 | orchestrator | + osism apply bootstrap 2025-09-19 00:22:53.696155 | orchestrator | 2025-09-19 00:22:53 | INFO  | Task bae558dd-9ad2-48b4-9a85-836617564079 (bootstrap) was prepared for execution. 2025-09-19 00:22:53.696269 | orchestrator | 2025-09-19 00:22:53 | INFO  | It takes a moment until task bae558dd-9ad2-48b4-9a85-836617564079 (bootstrap) has been started and output is visible here. 
2025-09-19 00:23:09.565957 | orchestrator | 2025-09-19 00:23:09.566124 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-09-19 00:23:09.566143 | orchestrator | 2025-09-19 00:23:09.566155 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-09-19 00:23:09.566189 | orchestrator | Friday 19 September 2025 00:22:57 +0000 (0:00:00.161) 0:00:00.161 ****** 2025-09-19 00:23:09.566201 | orchestrator | ok: [testbed-manager] 2025-09-19 00:23:09.566214 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:23:09.566225 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:23:09.566236 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:23:09.566248 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:23:09.566259 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:23:09.566269 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:23:09.566280 | orchestrator | 2025-09-19 00:23:09.566308 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-19 00:23:09.566320 | orchestrator | 2025-09-19 00:23:09.566331 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-19 00:23:09.566342 | orchestrator | Friday 19 September 2025 00:22:57 +0000 (0:00:00.227) 0:00:00.389 ****** 2025-09-19 00:23:09.566353 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:23:09.566364 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:23:09.566376 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:23:09.566387 | orchestrator | ok: [testbed-manager] 2025-09-19 00:23:09.566398 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:23:09.566409 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:23:09.566420 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:23:09.566431 | orchestrator | 2025-09-19 00:23:09.566442 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
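The first play above sorts every host into a group keyed on its bootstrap state before any real work runs, so later plays can target only hosts in the matching state. In Ansible this kind of partitioning is typically done with the `group_by` module; the sketch below mimics that behaviour in plain Python (the `state` variable name and `state_` group prefix are assumptions for illustration, not taken from the osism playbooks).

```python
from collections import defaultdict

def group_by_state(hostvars, default_state="bootstrap"):
    """Partition hosts into groups by a per-host 'state' variable,
    roughly what Ansible's group_by module does (illustrative only)."""
    groups = defaultdict(list)
    for host, variables in sorted(hostvars.items()):
        state = variables.get("state", default_state)
        groups[f"state_{state}"].append(host)
    return dict(groups)

hostvars = {
    "testbed-manager": {},                    # no explicit state -> default group
    "testbed-node-0": {"state": "bootstrap"},
    "testbed-node-1": {"state": "bootstrap"},
}
print(group_by_state(hostvars))
```

In the run above all seven hosts report `ok` for the grouping task, i.e. they all end up in the same bootstrap group and proceed together through fact gathering.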
2025-09-19 00:23:09.566453 | orchestrator | 2025-09-19 00:23:09.566464 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-19 00:23:09.566476 | orchestrator | Friday 19 September 2025 00:23:01 +0000 (0:00:03.797) 0:00:04.187 ****** 2025-09-19 00:23:09.566487 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-19 00:23:09.566501 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-19 00:23:09.566514 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-09-19 00:23:09.566526 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-19 00:23:09.566539 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-19 00:23:09.566552 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-09-19 00:23:09.566564 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-19 00:23:09.566577 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-19 00:23:09.566590 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-09-19 00:23:09.566603 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-19 00:23:09.566616 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-19 00:23:09.566629 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-09-19 00:23:09.566642 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-19 00:23:09.566653 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-19 00:23:09.566664 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-19 00:23:09.566674 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-09-19 00:23:09.566708 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-19 00:23:09.566719 | orchestrator | skipping: 
[testbed-node-1] => (item=testbed-node-2)  2025-09-19 00:23:09.566730 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-09-19 00:23:09.566741 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-19 00:23:09.566752 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-09-19 00:23:09.566763 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-09-19 00:23:09.566774 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:23:09.566786 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-19 00:23:09.566796 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-09-19 00:23:09.566815 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-09-19 00:23:09.566826 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-09-19 00:23:09.566837 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-19 00:23:09.566849 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-19 00:23:09.566859 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-09-19 00:23:09.566870 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-19 00:23:09.566881 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-09-19 00:23:09.566892 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:23:09.566903 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-19 00:23:09.566914 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-19 00:23:09.566925 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-19 00:23:09.566935 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-09-19 00:23:09.566946 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 00:23:09.566962 | orchestrator | skipping: 
[testbed-node-1] => (item=testbed-node-5)  2025-09-19 00:23:09.566973 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-19 00:23:09.566984 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:23:09.566995 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 00:23:09.567006 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-19 00:23:09.567016 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-09-19 00:23:09.567027 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-09-19 00:23:09.567038 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-09-19 00:23:09.567069 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 00:23:09.567081 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:23:09.567091 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-09-19 00:23:09.567102 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:23:09.567113 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-09-19 00:23:09.567124 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-09-19 00:23:09.567134 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-09-19 00:23:09.567145 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-09-19 00:23:09.567156 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:23:09.567167 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:23:09.567178 | orchestrator | 2025-09-19 00:23:09.567189 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-09-19 00:23:09.567200 | orchestrator | 2025-09-19 00:23:09.567210 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-09-19 00:23:09.567222 | orchestrator | Friday 19 September 2025 00:23:02 +0000 (0:00:00.500) 
0:00:04.687 ****** 2025-09-19 00:23:09.567233 | orchestrator | ok: [testbed-manager] 2025-09-19 00:23:09.567243 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:23:09.567254 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:23:09.567265 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:23:09.567276 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:23:09.567287 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:23:09.567298 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:23:09.567308 | orchestrator | 2025-09-19 00:23:09.567319 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-09-19 00:23:09.567330 | orchestrator | Friday 19 September 2025 00:23:03 +0000 (0:00:01.219) 0:00:05.907 ****** 2025-09-19 00:23:09.567341 | orchestrator | ok: [testbed-manager] 2025-09-19 00:23:09.567352 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:23:09.567363 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:23:09.567373 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:23:09.567384 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:23:09.567402 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:23:09.567413 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:23:09.567424 | orchestrator | 2025-09-19 00:23:09.567435 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-09-19 00:23:09.567446 | orchestrator | Friday 19 September 2025 00:23:04 +0000 (0:00:01.267) 0:00:07.174 ****** 2025-09-19 00:23:09.567458 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:23:09.567471 | orchestrator | 2025-09-19 00:23:09.567482 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-09-19 00:23:09.567493 | orchestrator | Friday 19 
September 2025 00:23:05 +0000 (0:00:00.268) 0:00:07.443 ****** 2025-09-19 00:23:09.567504 | orchestrator | changed: [testbed-manager] 2025-09-19 00:23:09.567515 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:23:09.567526 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:23:09.567537 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:23:09.567548 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:23:09.567558 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:23:09.567569 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:23:09.567580 | orchestrator | 2025-09-19 00:23:09.567591 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-09-19 00:23:09.567602 | orchestrator | Friday 19 September 2025 00:23:07 +0000 (0:00:02.061) 0:00:09.504 ****** 2025-09-19 00:23:09.567612 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:23:09.567625 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:23:09.567638 | orchestrator | 2025-09-19 00:23:09.567649 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-09-19 00:23:09.567660 | orchestrator | Friday 19 September 2025 00:23:07 +0000 (0:00:00.289) 0:00:09.794 ****** 2025-09-19 00:23:09.567670 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:23:09.567681 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:23:09.567707 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:23:09.567718 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:23:09.567729 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:23:09.567740 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:23:09.567751 | orchestrator | 2025-09-19 00:23:09.567762 | orchestrator | TASK [osism.commons.proxy : Set system 
wide settings in environment file] ****** 2025-09-19 00:23:09.567773 | orchestrator | Friday 19 September 2025 00:23:08 +0000 (0:00:01.079) 0:00:10.874 ****** 2025-09-19 00:23:09.567784 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:23:09.567795 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:23:09.567806 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:23:09.567816 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:23:09.567827 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:23:09.567838 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:23:09.567849 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:23:09.567860 | orchestrator | 2025-09-19 00:23:09.567871 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-09-19 00:23:09.567882 | orchestrator | Friday 19 September 2025 00:23:08 +0000 (0:00:00.557) 0:00:11.432 ****** 2025-09-19 00:23:09.567893 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:23:09.567904 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:23:09.567914 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:23:09.567925 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:23:09.567936 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:23:09.567947 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:23:09.567957 | orchestrator | ok: [testbed-manager] 2025-09-19 00:23:09.567968 | orchestrator | 2025-09-19 00:23:09.567979 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-19 00:23:09.567997 | orchestrator | Friday 19 September 2025 00:23:09 +0000 (0:00:00.430) 0:00:11.863 ****** 2025-09-19 00:23:09.568009 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:23:09.568020 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:23:09.568037 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:23:22.084004 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 00:23:22.084108 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:23:22.084122 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:23:22.084133 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:23:22.084143 | orchestrator | 2025-09-19 00:23:22.084155 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-19 00:23:22.084166 | orchestrator | Friday 19 September 2025 00:23:09 +0000 (0:00:00.206) 0:00:12.070 ****** 2025-09-19 00:23:22.084178 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:23:22.084222 | orchestrator | 2025-09-19 00:23:22.084242 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-19 00:23:22.084253 | orchestrator | Friday 19 September 2025 00:23:09 +0000 (0:00:00.287) 0:00:12.357 ****** 2025-09-19 00:23:22.084263 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:23:22.084273 | orchestrator | 2025-09-19 00:23:22.084284 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-19 00:23:22.084294 | orchestrator | Friday 19 September 2025 00:23:10 +0000 (0:00:00.329) 0:00:12.687 ****** 2025-09-19 00:23:22.084304 | orchestrator | ok: [testbed-manager] 2025-09-19 00:23:22.084316 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:23:22.084326 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:23:22.084336 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:23:22.084345 | orchestrator | ok: [testbed-node-4] 2025-09-19 
00:23:22.084355 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:23:22.084365 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:23:22.084374 | orchestrator | 2025-09-19 00:23:22.084385 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-19 00:23:22.084394 | orchestrator | Friday 19 September 2025 00:23:11 +0000 (0:00:01.446) 0:00:14.133 ****** 2025-09-19 00:23:22.084404 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:23:22.084414 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:23:22.084424 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:23:22.084433 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:23:22.084443 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:23:22.084453 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:23:22.084463 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:23:22.084472 | orchestrator | 2025-09-19 00:23:22.084482 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-19 00:23:22.084492 | orchestrator | Friday 19 September 2025 00:23:11 +0000 (0:00:00.216) 0:00:14.350 ****** 2025-09-19 00:23:22.084502 | orchestrator | ok: [testbed-manager] 2025-09-19 00:23:22.084512 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:23:22.084522 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:23:22.084532 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:23:22.084543 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:23:22.084555 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:23:22.084566 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:23:22.084577 | orchestrator | 2025-09-19 00:23:22.084589 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-19 00:23:22.084600 | orchestrator | Friday 19 September 2025 00:23:12 +0000 (0:00:00.659) 0:00:15.010 ****** 2025-09-19 00:23:22.084611 | orchestrator | skipping: 
[testbed-manager] 2025-09-19 00:23:22.084645 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:23:22.084657 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:23:22.084668 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:23:22.084697 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:23:22.084709 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:23:22.084720 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:23:22.084732 | orchestrator | 2025-09-19 00:23:22.084743 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-19 00:23:22.084755 | orchestrator | Friday 19 September 2025 00:23:12 +0000 (0:00:00.217) 0:00:15.227 ****** 2025-09-19 00:23:22.084766 | orchestrator | ok: [testbed-manager] 2025-09-19 00:23:22.084815 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:23:22.084827 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:23:22.084838 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:23:22.084849 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:23:22.084860 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:23:22.084872 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:23:22.084883 | orchestrator | 2025-09-19 00:23:22.084895 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-19 00:23:22.084905 | orchestrator | Friday 19 September 2025 00:23:13 +0000 (0:00:00.552) 0:00:15.780 ****** 2025-09-19 00:23:22.084914 | orchestrator | ok: [testbed-manager] 2025-09-19 00:23:22.084924 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:23:22.084933 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:23:22.084944 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:23:22.084953 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:23:22.084963 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:23:22.084972 | orchestrator | changed: 
[testbed-node-5] 2025-09-19 00:23:22.084982 | orchestrator | 2025-09-19 00:23:22.084992 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-19 00:23:22.085014 | orchestrator | Friday 19 September 2025 00:23:14 +0000 (0:00:01.138) 0:00:16.918 ****** 2025-09-19 00:23:22.085024 | orchestrator | ok: [testbed-manager] 2025-09-19 00:23:22.085034 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:23:22.085044 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:23:22.085053 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:23:22.085063 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:23:22.085073 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:23:22.085082 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:23:22.085092 | orchestrator | 2025-09-19 00:23:22.085102 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-19 00:23:22.085112 | orchestrator | Friday 19 September 2025 00:23:15 +0000 (0:00:01.281) 0:00:18.200 ****** 2025-09-19 00:23:22.085140 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:23:22.085151 | orchestrator | 2025-09-19 00:23:22.085161 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-19 00:23:22.085170 | orchestrator | Friday 19 September 2025 00:23:16 +0000 (0:00:00.366) 0:00:18.566 ****** 2025-09-19 00:23:22.085180 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:23:22.085190 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:23:22.085200 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:23:22.085209 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:23:22.085219 | orchestrator | changed: [testbed-node-3] 2025-09-19 
00:23:22.085228 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:23:22.085238 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:23:22.085248 | orchestrator | 2025-09-19 00:23:22.085258 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-19 00:23:22.085267 | orchestrator | Friday 19 September 2025 00:23:17 +0000 (0:00:01.243) 0:00:19.809 ****** 2025-09-19 00:23:22.085277 | orchestrator | ok: [testbed-manager] 2025-09-19 00:23:22.085294 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:23:22.085304 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:23:22.085314 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:23:22.085324 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:23:22.085334 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:23:22.085343 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:23:22.085353 | orchestrator | 2025-09-19 00:23:22.085363 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-19 00:23:22.085373 | orchestrator | Friday 19 September 2025 00:23:17 +0000 (0:00:00.229) 0:00:20.038 ****** 2025-09-19 00:23:22.085382 | orchestrator | ok: [testbed-manager] 2025-09-19 00:23:22.085392 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:23:22.085402 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:23:22.085411 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:23:22.085421 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:23:22.085430 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:23:22.085440 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:23:22.085450 | orchestrator | 2025-09-19 00:23:22.085460 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-19 00:23:22.085469 | orchestrator | Friday 19 September 2025 00:23:17 +0000 (0:00:00.223) 0:00:20.262 ****** 2025-09-19 00:23:22.085479 | orchestrator | ok: [testbed-manager] 2025-09-19 00:23:22.085489 | 
orchestrator | ok: [testbed-node-0] 2025-09-19 00:23:22.085498 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:23:22.085508 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:23:22.085517 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:23:22.085527 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:23:22.085537 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:23:22.085546 | orchestrator | 2025-09-19 00:23:22.085556 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-19 00:23:22.085566 | orchestrator | Friday 19 September 2025 00:23:18 +0000 (0:00:00.245) 0:00:20.508 ****** 2025-09-19 00:23:22.085577 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:23:22.085588 | orchestrator | 2025-09-19 00:23:22.085598 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-19 00:23:22.085608 | orchestrator | Friday 19 September 2025 00:23:18 +0000 (0:00:00.270) 0:00:20.778 ****** 2025-09-19 00:23:22.085617 | orchestrator | ok: [testbed-manager] 2025-09-19 00:23:22.085627 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:23:22.085637 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:23:22.085646 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:23:22.085656 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:23:22.085665 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:23:22.085675 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:23:22.085712 | orchestrator | 2025-09-19 00:23:22.085722 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-19 00:23:22.085731 | orchestrator | Friday 19 September 2025 00:23:18 +0000 (0:00:00.617) 0:00:21.396 ****** 2025-09-19 00:23:22.085741 | orchestrator | 
skipping: [testbed-manager] 2025-09-19 00:23:22.085751 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:23:22.085761 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:23:22.085770 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:23:22.085780 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:23:22.085789 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:23:22.085799 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:23:22.085809 | orchestrator | 2025-09-19 00:23:22.085818 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-19 00:23:22.085828 | orchestrator | Friday 19 September 2025 00:23:19 +0000 (0:00:00.235) 0:00:21.632 ****** 2025-09-19 00:23:22.085838 | orchestrator | ok: [testbed-manager] 2025-09-19 00:23:22.085848 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:23:22.085858 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:23:22.085867 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:23:22.085883 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:23:22.085893 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:23:22.085903 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:23:22.085913 | orchestrator | 2025-09-19 00:23:22.085922 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-19 00:23:22.085932 | orchestrator | Friday 19 September 2025 00:23:20 +0000 (0:00:01.073) 0:00:22.706 ****** 2025-09-19 00:23:22.085947 | orchestrator | ok: [testbed-manager] 2025-09-19 00:23:22.085956 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:23:22.085966 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:23:22.085976 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:23:22.085986 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:23:22.085995 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:23:22.086005 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:23:22.086073 | orchestrator | 
2025-09-19 00:23:22.086084 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-19 00:23:22.086094 | orchestrator | Friday 19 September 2025 00:23:20 +0000 (0:00:00.590) 0:00:23.296 ****** 2025-09-19 00:23:22.086104 | orchestrator | ok: [testbed-manager] 2025-09-19 00:23:22.086114 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:23:22.086123 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:23:22.086133 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:23:22.086150 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:24:04.016457 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:24:04.016571 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:24:04.016588 | orchestrator | 2025-09-19 00:24:04.016601 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-19 00:24:04.016614 | orchestrator | Friday 19 September 2025 00:23:22 +0000 (0:00:01.208) 0:00:24.505 ****** 2025-09-19 00:24:04.016625 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:24:04.016637 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:24:04.016648 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:24:04.016720 | orchestrator | changed: [testbed-manager] 2025-09-19 00:24:04.016742 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:24:04.016754 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:24:04.016765 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:24:04.016776 | orchestrator | 2025-09-19 00:24:04.016787 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-09-19 00:24:04.016798 | orchestrator | Friday 19 September 2025 00:23:40 +0000 (0:00:17.934) 0:00:42.440 ****** 2025-09-19 00:24:04.016809 | orchestrator | ok: [testbed-manager] 2025-09-19 00:24:04.016820 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:24:04.016831 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:24:04.016842 | orchestrator 
| ok: [testbed-node-2]
2025-09-19 00:24:04.016852 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:24:04.016863 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:24:04.016873 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:24:04.016884 | orchestrator |
2025-09-19 00:24:04.016895 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-09-19 00:24:04.016906 | orchestrator | Friday 19 September 2025 00:23:40 +0000 (0:00:00.236) 0:00:42.676 ******
2025-09-19 00:24:04.016917 | orchestrator | ok: [testbed-manager]
2025-09-19 00:24:04.016927 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:24:04.016938 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:24:04.016949 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:24:04.016961 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:24:04.016979 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:24:04.016999 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:24:04.017017 | orchestrator |
2025-09-19 00:24:04.017035 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-09-19 00:24:04.017054 | orchestrator | Friday 19 September 2025 00:23:40 +0000 (0:00:00.217) 0:00:42.894 ******
2025-09-19 00:24:04.017072 | orchestrator | ok: [testbed-manager]
2025-09-19 00:24:04.017090 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:24:04.017108 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:24:04.017157 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:24:04.017178 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:24:04.017198 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:24:04.017216 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:24:04.017235 | orchestrator |
2025-09-19 00:24:04.017249 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-09-19 00:24:04.017262 | orchestrator | Friday 19 September 2025 00:23:40 +0000 (0:00:00.208) 0:00:43.102 ******
2025-09-19 00:24:04.017278 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 00:24:04.017297 | orchestrator |
2025-09-19 00:24:04.017317 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-09-19 00:24:04.017336 | orchestrator | Friday 19 September 2025 00:23:40 +0000 (0:00:00.279) 0:00:43.381 ******
2025-09-19 00:24:04.017354 | orchestrator | ok: [testbed-manager]
2025-09-19 00:24:04.017373 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:24:04.017391 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:24:04.017408 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:24:04.017428 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:24:04.017446 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:24:04.017463 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:24:04.017482 | orchestrator |
2025-09-19 00:24:04.017493 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-09-19 00:24:04.017505 | orchestrator | Friday 19 September 2025 00:23:42 +0000 (0:00:01.814) 0:00:45.196 ******
2025-09-19 00:24:04.017516 | orchestrator | changed: [testbed-manager]
2025-09-19 00:24:04.017526 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:24:04.017537 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:24:04.017548 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:24:04.017559 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:24:04.017569 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:24:04.017580 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:24:04.017590 | orchestrator |
2025-09-19 00:24:04.017601 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-09-19 00:24:04.017612 | orchestrator | Friday 19 September 2025 00:23:43 +0000 (0:00:01.097) 0:00:46.294 ******
2025-09-19 00:24:04.017622 | orchestrator | ok: [testbed-manager]
2025-09-19 00:24:04.017633 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:24:04.017644 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:24:04.017654 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:24:04.017695 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:24:04.017707 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:24:04.017717 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:24:04.017728 | orchestrator |
2025-09-19 00:24:04.017739 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-09-19 00:24:04.017750 | orchestrator | Friday 19 September 2025 00:23:44 +0000 (0:00:00.855) 0:00:47.149 ******
2025-09-19 00:24:04.017761 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 00:24:04.017774 | orchestrator |
2025-09-19 00:24:04.017785 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-09-19 00:24:04.017797 | orchestrator | Friday 19 September 2025 00:23:44 +0000 (0:00:00.278) 0:00:47.428 ******
2025-09-19 00:24:04.017808 | orchestrator | changed: [testbed-manager]
2025-09-19 00:24:04.017819 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:24:04.017829 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:24:04.017840 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:24:04.017851 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:24:04.017861 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:24:04.017872 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:24:04.017883 | orchestrator |
2025-09-19 00:24:04.017928 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-09-19 00:24:04.017940 | orchestrator | Friday 19 September 2025 00:23:46 +0000 (0:00:01.084) 0:00:48.513 ******
2025-09-19 00:24:04.017951 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:24:04.017961 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:24:04.017972 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:24:04.017983 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:24:04.017993 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:24:04.018004 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:24:04.018077 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:24:04.018090 | orchestrator |
2025-09-19 00:24:04.018101 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-09-19 00:24:04.018112 | orchestrator | Friday 19 September 2025 00:23:46 +0000 (0:00:00.297) 0:00:48.810 ******
2025-09-19 00:24:04.018128 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:24:04.018146 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:24:04.018164 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:24:04.018182 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:24:04.018199 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:24:04.018215 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:24:04.018232 | orchestrator | changed: [testbed-manager]
2025-09-19 00:24:04.018249 | orchestrator |
2025-09-19 00:24:04.018267 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-09-19 00:24:04.018285 | orchestrator | Friday 19 September 2025 00:23:58 +0000 (0:00:11.852) 0:01:00.663 ******
2025-09-19 00:24:04.018303 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:24:04.018321 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:24:04.018339 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:24:04.018357 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:24:04.018376 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:24:04.018394 | orchestrator | ok: [testbed-manager]
2025-09-19 00:24:04.018413 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:24:04.018430 | orchestrator |
2025-09-19 00:24:04.018448 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-09-19 00:24:04.018460 | orchestrator | Friday 19 September 2025 00:23:59 +0000 (0:00:01.438) 0:01:02.101 ******
2025-09-19 00:24:04.018471 | orchestrator | ok: [testbed-manager]
2025-09-19 00:24:04.018481 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:24:04.018492 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:24:04.018508 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:24:04.018525 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:24:04.018542 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:24:04.018559 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:24:04.018576 | orchestrator |
2025-09-19 00:24:04.018593 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-09-19 00:24:04.018612 | orchestrator | Friday 19 September 2025 00:24:00 +0000 (0:00:00.914) 0:01:03.015 ******
2025-09-19 00:24:04.018630 | orchestrator | ok: [testbed-manager]
2025-09-19 00:24:04.018647 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:24:04.018697 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:24:04.018718 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:24:04.018736 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:24:04.018755 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:24:04.018767 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:24:04.018777 | orchestrator |
2025-09-19 00:24:04.018788 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-09-19 00:24:04.018800 | orchestrator | Friday 19 September 2025 00:24:00 +0000 (0:00:00.227) 0:01:03.243 ******
2025-09-19 00:24:04.018811 | orchestrator | ok: [testbed-manager]
2025-09-19 00:24:04.018822 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:24:04.018832 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:24:04.018843 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:24:04.018854 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:24:04.018864 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:24:04.018875 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:24:04.018902 | orchestrator |
2025-09-19 00:24:04.018930 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-09-19 00:24:04.018941 | orchestrator | Friday 19 September 2025 00:24:01 +0000 (0:00:00.217) 0:01:03.460 ******
2025-09-19 00:24:04.018953 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 00:24:04.018965 | orchestrator |
2025-09-19 00:24:04.018976 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-09-19 00:24:04.018987 | orchestrator | Friday 19 September 2025 00:24:01 +0000 (0:00:00.281) 0:01:03.741 ******
2025-09-19 00:24:04.018998 | orchestrator | ok: [testbed-manager]
2025-09-19 00:24:04.019008 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:24:04.019019 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:24:04.019029 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:24:04.019040 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:24:04.019050 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:24:04.019061 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:24:04.019071 | orchestrator |
2025-09-19 00:24:04.019082 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-09-19 00:24:04.019093 | orchestrator | Friday 19 September 2025 00:24:03 +0000 (0:00:01.811) 0:01:05.553 ******
2025-09-19 00:24:04.019104 | orchestrator | changed: [testbed-manager]
2025-09-19 00:24:04.019114 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:24:04.019125 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:24:04.019136 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:24:04.019146 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:24:04.019161 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:24:04.019172 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:24:04.019183 | orchestrator |
2025-09-19 00:24:04.019194 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-09-19 00:24:04.019205 | orchestrator | Friday 19 September 2025 00:24:03 +0000 (0:00:00.639) 0:01:06.193 ******
2025-09-19 00:24:04.019216 | orchestrator | ok: [testbed-manager]
2025-09-19 00:24:04.019226 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:24:04.019237 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:24:04.019248 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:24:04.019258 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:24:04.019269 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:24:04.019279 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:24:04.019290 | orchestrator |
2025-09-19 00:24:04.019314 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-09-19 00:26:21.364109 | orchestrator | Friday 19 September 2025 00:24:04 +0000 (0:00:00.245) 0:01:06.438 ******
2025-09-19 00:26:21.364217 | orchestrator | ok: [testbed-manager]
2025-09-19 00:26:21.364232 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:26:21.364243 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:26:21.364253 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:26:21.364263 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:26:21.364273 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:26:21.364282 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:26:21.364292 | orchestrator |
2025-09-19 00:26:21.364303 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-09-19 00:26:21.364313 | orchestrator | Friday 19 September 2025 00:24:05 +0000 (0:00:01.225) 0:01:07.663 ******
2025-09-19 00:26:21.364323 | orchestrator | changed: [testbed-manager]
2025-09-19 00:26:21.364333 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:26:21.364343 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:26:21.364352 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:26:21.364362 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:26:21.364371 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:26:21.364381 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:26:21.364391 | orchestrator |
2025-09-19 00:26:21.364401 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-09-19 00:26:21.364432 | orchestrator | Friday 19 September 2025 00:24:07 +0000 (0:00:01.955) 0:01:09.619 ******
2025-09-19 00:26:21.364442 | orchestrator | ok: [testbed-manager]
2025-09-19 00:26:21.364452 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:26:21.364462 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:26:21.364471 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:26:21.364481 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:26:21.364490 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:26:21.364499 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:26:21.364509 | orchestrator |
2025-09-19 00:26:21.364519 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-09-19 00:26:21.364528 | orchestrator | Friday 19 September 2025 00:24:09 +0000 (0:00:02.527) 0:01:12.147 ******
2025-09-19 00:26:21.364538 | orchestrator | ok: [testbed-manager]
2025-09-19 00:26:21.364548 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:26:21.364557 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:26:21.364566 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:26:21.364576 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:26:21.364585 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:26:21.364594 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:26:21.364629 | orchestrator |
2025-09-19 00:26:21.364640 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-09-19 00:26:21.364650 | orchestrator | Friday 19 September 2025 00:24:51 +0000 (0:00:41.781) 0:01:53.928 ******
2025-09-19 00:26:21.364662 | orchestrator | changed: [testbed-manager]
2025-09-19 00:26:21.364673 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:26:21.364684 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:26:21.364695 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:26:21.364706 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:26:21.364717 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:26:21.364727 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:26:21.364738 | orchestrator |
2025-09-19 00:26:21.364749 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-09-19 00:26:21.364760 | orchestrator | Friday 19 September 2025 00:26:06 +0000 (0:01:15.003) 0:03:08.932 ******
2025-09-19 00:26:21.364771 | orchestrator | ok: [testbed-manager]
2025-09-19 00:26:21.364782 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:26:21.364793 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:26:21.364803 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:26:21.364815 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:26:21.364826 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:26:21.364837 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:26:21.364847 | orchestrator |
2025-09-19 00:26:21.364858 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-09-19 00:26:21.364869 | orchestrator | Friday 19 September 2025 00:26:08 +0000 (0:00:01.723) 0:03:10.655 ******
2025-09-19 00:26:21.364878 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:26:21.364888 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:26:21.364898 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:26:21.364907 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:26:21.364917 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:26:21.364926 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:26:21.364936 | orchestrator | changed: [testbed-manager]
2025-09-19 00:26:21.364945 | orchestrator |
2025-09-19 00:26:21.364955 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-09-19 00:26:21.364965 | orchestrator | Friday 19 September 2025 00:26:20 +0000 (0:00:11.969) 0:03:22.624 ******
2025-09-19 00:26:21.364987 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-09-19 00:26:21.365015 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-09-19 00:26:21.365056 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-09-19 00:26:21.365069 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-09-19 00:26:21.365079 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-09-19 00:26:21.365089 | orchestrator |
2025-09-19 00:26:21.365099 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-09-19 00:26:21.365109 | orchestrator | Friday 19 September 2025 00:26:20 +0000 (0:00:00.371) 0:03:22.996 ******
2025-09-19 00:26:21.365119 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 00:26:21.365129 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:26:21.365139 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 00:26:21.365148 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:26:21.365158 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 00:26:21.365167 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:26:21.365177 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 00:26:21.365186 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:26:21.365196 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 00:26:21.365206 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 00:26:21.365215 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 00:26:21.365225 | orchestrator |
2025-09-19 00:26:21.365235 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-09-19 00:26:21.365244 | orchestrator | Friday 19 September 2025 00:26:21 +0000 (0:00:00.565) 0:03:23.561 ******
2025-09-19 00:26:21.365254 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 00:26:21.365265 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 00:26:21.365275 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 00:26:21.365285 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 00:26:21.365294 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 00:26:21.365304 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 00:26:21.365320 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 00:26:21.365329 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 00:26:21.365339 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 00:26:21.365348 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 00:26:21.365358 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:26:21.365368 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 00:26:21.365377 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 00:26:21.365387 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 00:26:21.365396 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 00:26:21.365406 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 00:26:21.365416 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 00:26:21.365425 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 00:26:21.365435 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 00:26:21.365445 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 00:26:21.365455 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 00:26:21.365470 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 00:26:30.703403 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 00:26:30.703525 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 00:26:30.703540 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 00:26:30.703553 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 00:26:30.703564 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 00:26:30.703576 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 00:26:30.703588 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 00:26:30.703658 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 00:26:30.703755 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:26:30.703771 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 00:26:30.703783 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:26:30.703794 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 00:26:30.703806 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 00:26:30.703817 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 00:26:30.703828 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 00:26:30.703839 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 00:26:30.703850 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 00:26:30.703886 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 00:26:30.703898 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 00:26:30.703909 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 00:26:30.703919 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 00:26:30.703930 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:26:30.703942 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 00:26:30.703954 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 00:26:30.703966 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 00:26:30.703978 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 00:26:30.703991 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 00:26:30.704003 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 00:26:30.704016 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 00:26:30.704028 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 00:26:30.704041 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 00:26:30.704053 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 00:26:30.704064 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 00:26:30.704075 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 00:26:30.704086 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 00:26:30.704096 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 00:26:30.704107 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 00:26:30.704140 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 00:26:30.704152 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 00:26:30.704162 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 00:26:30.704173 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 00:26:30.704185 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 00:26:30.704196 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 00:26:30.704226 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 00:26:30.704237 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 00:26:30.704248 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 00:26:30.704259 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 00:26:30.704270 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 00:26:30.704281 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 00:26:30.704292 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 00:26:30.704311 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 00:26:30.704322 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 00:26:30.704333 | orchestrator |
2025-09-19 00:26:30.704344 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-09-19 00:26:30.704356 | orchestrator | Friday 19 September 2025 00:26:24 +0000 (0:00:03.521) 0:03:27.083 ******
2025-09-19 00:26:30.704367 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 00:26:30.704378 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 00:26:30.704389 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 00:26:30.704399 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 00:26:30.704410 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 00:26:30.704421 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 00:26:30.704436 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 00:26:30.704447 | orchestrator |
2025-09-19 00:26:30.704459 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-09-19 00:26:30.704470 | orchestrator | Friday 19 September 2025 00:26:27 +0000 (0:00:02.477) 0:03:29.560 ******
2025-09-19 00:26:30.704481 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 00:26:30.704492 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:26:30.704503 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 00:26:30.704514 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:26:30.704525 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 00:26:30.704536 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:26:30.704547 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 00:26:30.704558 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:26:30.704569 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 00:26:30.704580 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 00:26:30.704591 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 00:26:30.704625 | orchestrator |
2025-09-19 00:26:30.704637 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-09-19 00:26:30.704648 | orchestrator | Friday 19 September 2025 00:26:29 +0000 (0:00:02.581) 0:03:32.141 ******
2025-09-19 00:26:30.704658 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 00:26:30.704669 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:26:30.704680 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 00:26:30.704691 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:26:30.704702 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 00:26:30.704713 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:26:30.704724 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 00:26:30.704735 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:26:30.704746 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 00:26:30.704762 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 00:26:30.704780 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 00:26:30.704791 | orchestrator |
2025-09-19 00:26:30.704802 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-09-19 00:26:30.704813 | orchestrator | Friday 19 September 2025 00:26:30 +0000 (0:00:00.669) 0:03:32.811 ******
2025-09-19 00:26:30.704824 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:26:30.704835 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:26:30.704846 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:26:30.704857 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:26:30.704868 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:26:30.704885 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:26:42.454221 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:26:42.454335 | orchestrator |
2025-09-19 00:26:42.454351 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-09-19 00:26:42.454364 | orchestrator | Friday 19 September 2025 00:26:30 +0000 (0:00:00.320) 0:03:33.132 ******
2025-09-19 00:26:42.454376 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:26:42.454388 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:26:42.454399 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:26:42.454410 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:26:42.454421 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:26:42.454431 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:26:42.454443 | orchestrator | ok: [testbed-manager]
2025-09-19 00:26:42.454453 | orchestrator |
2025-09-19 00:26:42.454465 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-09-19 00:26:42.454476 | orchestrator | Friday 19 September 2025 00:26:36 +0000 (0:00:05.674) 0:03:38.806 ******
2025-09-19 00:26:42.454487 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-09-19 00:26:42.454498 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-09-19 00:26:42.454509 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:26:42.454520 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-09-19 00:26:42.454531 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:26:42.454542 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:26:42.454553 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-09-19 00:26:42.454564 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-09-19 00:26:42.454574 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:26:42.454585 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:26:42.454596 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-09-19 00:26:42.454670 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:26:42.454684 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-09-19 00:26:42.454695 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:26:42.454706 | orchestrator |
2025-09-19 00:26:42.454718 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-09-19 00:26:42.454729 | orchestrator | Friday 19 September 2025 00:26:36 +0000 (0:00:00.294) 0:03:39.101 ******
2025-09-19 00:26:42.454740 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-09-19 00:26:42.454751 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-09-19 00:26:42.454764 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-09-19 00:26:42.454789 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-09-19 00:26:42.454802 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-09-19 00:26:42.454827 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-09-19 00:26:42.454839 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-09-19 00:26:42.454852 | orchestrator |
2025-09-19 00:26:42.454864 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-09-19 00:26:42.454877 | orchestrator | Friday 19 September 2025 00:26:37 +0000 (0:00:01.042) 0:03:40.143 ******
2025-09-19 00:26:42.454892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 00:26:42.454930 | orchestrator |
2025-09-19 00:26:42.454944 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-09-19 00:26:42.454957 | orchestrator | Friday 19 September 2025 00:26:38 +0000 (0:00:00.499) 0:03:40.643 ******
2025-09-19 00:26:42.454969 | orchestrator | ok: [testbed-manager]
2025-09-19 00:26:42.454980 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:26:42.454991 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:26:42.455002 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:26:42.455013 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:26:42.455023 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:26:42.455034 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:26:42.455045 | orchestrator |
2025-09-19 00:26:42.455056 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-09-19 00:26:42.455067 | orchestrator | Friday 19 September 2025 00:26:39 +0000 (0:00:01.264) 0:03:41.908 ******
2025-09-19 00:26:42.455078 |
orchestrator | ok: [testbed-manager] 2025-09-19 00:26:42.455089 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:26:42.455100 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:26:42.455110 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:26:42.455121 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:26:42.455132 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:26:42.455143 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:26:42.455153 | orchestrator | 2025-09-19 00:26:42.455164 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-09-19 00:26:42.455175 | orchestrator | Friday 19 September 2025 00:26:40 +0000 (0:00:00.620) 0:03:42.528 ****** 2025-09-19 00:26:42.455186 | orchestrator | changed: [testbed-manager] 2025-09-19 00:26:42.455197 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:26:42.455208 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:26:42.455219 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:26:42.455230 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:26:42.455240 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:26:42.455251 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:26:42.455262 | orchestrator | 2025-09-19 00:26:42.455273 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-09-19 00:26:42.455284 | orchestrator | Friday 19 September 2025 00:26:40 +0000 (0:00:00.678) 0:03:43.207 ****** 2025-09-19 00:26:42.455309 | orchestrator | ok: [testbed-manager] 2025-09-19 00:26:42.455320 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:26:42.455331 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:26:42.455342 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:26:42.455353 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:26:42.455364 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:26:42.455375 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:26:42.455386 | orchestrator | 
2025-09-19 00:26:42.455397 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-09-19 00:26:42.455408 | orchestrator | Friday 19 September 2025  00:26:41 +0000 (0:00:00.630)       0:03:43.838 ******
2025-09-19 00:26:42.455442 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758240232.313668, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 00:26:42.455458 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758240277.7324803, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 00:26:42.455479 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758240277.518871, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 00:26:42.455491 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758240277.7048848, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 00:26:42.455503 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758240258.9243999, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 00:26:42.455514 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758240279.083812, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 00:26:42.455526 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758240268.68674, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 00:26:42.455554 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 00:27:08.917947 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 00:27:08.918144 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 00:27:08.918165 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 00:27:08.918196 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 00:27:08.918208 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 00:27:08.918225 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 00:27:08.918237 | orchestrator |
2025-09-19 00:27:08.918251 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-09-19 00:27:08.918263 | orchestrator | Friday 19 September 2025  00:26:42 +0000 (0:00:01.031)       0:03:44.870 ******
2025-09-19 00:27:08.918275 | orchestrator | changed: [testbed-manager]
2025-09-19 00:27:08.918286 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:27:08.918297 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:27:08.918308 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:27:08.918318 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:27:08.918329 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:27:08.918340 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:27:08.918350 | orchestrator |
2025-09-19 00:27:08.918362 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-09-19 00:27:08.918373 | orchestrator | Friday 19 September 2025  00:26:43 +0000 (0:00:01.234)       0:03:46.104 ******
2025-09-19 00:27:08.918392 | orchestrator | changed: [testbed-manager]
2025-09-19 00:27:08.918403 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:27:08.918414 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:27:08.918425 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:27:08.918454 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:27:08.918465 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:27:08.918476 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:27:08.918487 | orchestrator |
2025-09-19 00:27:08.918497 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-09-19 00:27:08.918509 | orchestrator | Friday 19 September 2025  00:26:44 +0000 (0:00:01.197)       0:03:47.302 ******
2025-09-19 00:27:08.918519 | orchestrator | changed: [testbed-manager]
2025-09-19 00:27:08.918530 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:27:08.918541 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:27:08.918551 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:27:08.918562 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:27:08.918573 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:27:08.918583 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:27:08.918594 | orchestrator |
2025-09-19 00:27:08.918605 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-09-19 00:27:08.918641 | orchestrator | Friday 19 September 2025  00:26:46 +0000 (0:00:01.159)       0:03:48.461 ******
2025-09-19 00:27:08.918652 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:27:08.918663 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:27:08.918674 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:27:08.918685 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:27:08.918695 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:27:08.918706 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:27:08.918717 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:27:08.918728 | orchestrator |
2025-09-19 00:27:08.918739 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-09-19 00:27:08.918750 | orchestrator | Friday 19 September 2025  00:26:46 +0000 (0:00:00.277)       0:03:48.739 ******
2025-09-19 00:27:08.918761 | orchestrator | ok: [testbed-manager]
2025-09-19 00:27:08.918773 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:27:08.918784 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:27:08.918795 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:27:08.918805 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:27:08.918816 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:27:08.918827 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:27:08.918838 | orchestrator |
2025-09-19 00:27:08.918849 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-09-19 00:27:08.918860 | orchestrator | Friday 19 September 2025  00:26:47 +0000 (0:00:00.769)       0:03:49.509 ******
2025-09-19 00:27:08.918873 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 00:27:08.918886 | orchestrator |
2025-09-19 00:27:08.918897 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-09-19 00:27:08.918908 | orchestrator | Friday 19 September 2025  00:26:47 +0000 (0:00:00.419)       0:03:49.928 ******
2025-09-19 00:27:08.918919 | orchestrator | ok: [testbed-manager]
2025-09-19 00:27:08.918929 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:27:08.918940 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:27:08.918951 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:27:08.918962 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:27:08.918973 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:27:08.918984 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:27:08.918994 | orchestrator |
2025-09-19 00:27:08.919006 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-09-19 00:27:08.919016 | orchestrator | Friday 19 September 2025  00:26:56 +0000 (0:00:09.132)       0:03:59.060 ******
2025-09-19 00:27:08.919027 | orchestrator | ok: [testbed-manager]
2025-09-19 00:27:08.919046 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:27:08.919057 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:27:08.919068 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:27:08.919079 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:27:08.919090 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:27:08.919100 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:27:08.919111 | orchestrator |
2025-09-19 00:27:08.919122 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-09-19 00:27:08.919132 | orchestrator | Friday 19 September 2025  00:26:58 +0000 (0:00:01.497)       0:04:00.558 ******
2025-09-19 00:27:08.919143 | orchestrator | ok: [testbed-manager]
2025-09-19 00:27:08.919154 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:27:08.919164 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:27:08.919175 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:27:08.919186 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:27:08.919196 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:27:08.919207 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:27:08.919218 | orchestrator |
2025-09-19 00:27:08.919228 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-09-19 00:27:08.919239 | orchestrator | Friday 19 September 2025  00:26:59 +0000 (0:00:01.028)       0:04:01.586 ******
2025-09-19 00:27:08.919256 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 00:27:08.919267 | orchestrator |
2025-09-19 00:27:08.919278 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-09-19 00:27:08.919289 | orchestrator | Friday 19 September 2025  00:26:59 +0000 (0:00:00.503)       0:04:02.090 ******
2025-09-19 00:27:08.919300 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:27:08.919311 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:27:08.919322 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:27:08.919332 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:27:08.919343 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:27:08.919354 | orchestrator | changed: [testbed-manager]
2025-09-19 00:27:08.919365 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:27:08.919375 | orchestrator |
2025-09-19 00:27:08.919386 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-09-19 00:27:08.919397 | orchestrator | Friday 19 September 2025  00:27:08 +0000 (0:00:08.640)       0:04:10.730 ******
2025-09-19 00:27:08.919408 | orchestrator | changed: [testbed-manager]
2025-09-19 00:27:08.919419 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:27:08.919430 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:27:08.919447 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:28:16.683661 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:28:16.683776 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:28:16.683793 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:28:16.683806 | orchestrator |
2025-09-19 00:28:16.683818 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-09-19 00:28:16.683831 | orchestrator | Friday 19 September 2025  00:27:08 +0000 (0:00:00.604)       0:04:11.334 ******
2025-09-19 00:28:16.683843 | orchestrator | changed: [testbed-manager]
2025-09-19 00:28:16.683854 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:28:16.683865 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:28:16.683876 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:28:16.683887 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:28:16.683897 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:28:16.683908 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:28:16.683919 | orchestrator |
2025-09-19 00:28:16.683930 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-09-19 00:28:16.683941 | orchestrator | Friday 19 September 2025  00:27:10 +0000 (0:00:01.168)       0:04:12.503 ******
2025-09-19 00:28:16.683952 | orchestrator | changed: [testbed-manager]
2025-09-19 00:28:16.683963 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:28:16.683999 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:28:16.684011 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:28:16.684021 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:28:16.684032 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:28:16.684042 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:28:16.684053 | orchestrator |
2025-09-19 00:28:16.684064 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-09-19 00:28:16.684075 | orchestrator | Friday 19 September 2025  00:27:11 +0000 (0:00:01.064)       0:04:13.568 ******
2025-09-19 00:28:16.684085 | orchestrator | ok: [testbed-manager]
2025-09-19 00:28:16.684097 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:28:16.684107 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:28:16.684118 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:28:16.684129 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:28:16.684139 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:28:16.684151 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:28:16.684163 | orchestrator |
2025-09-19 00:28:16.684175 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-09-19 00:28:16.684189 | orchestrator | Friday 19 September 2025  00:27:11 +0000 (0:00:00.298)       0:04:13.866 ******
2025-09-19 00:28:16.684201 | orchestrator | ok: [testbed-manager]
2025-09-19 00:28:16.684213 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:28:16.684225 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:28:16.684237 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:28:16.684249 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:28:16.684261 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:28:16.684274 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:28:16.684286 | orchestrator |
2025-09-19 00:28:16.684298 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-09-19 00:28:16.684310 | orchestrator | Friday 19 September 2025  00:27:11 +0000 (0:00:00.311)       0:04:14.178 ******
2025-09-19 00:28:16.684323 | orchestrator | ok: [testbed-manager]
2025-09-19 00:28:16.684335 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:28:16.684347 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:28:16.684359 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:28:16.684371 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:28:16.684383 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:28:16.684395 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:28:16.684406 | orchestrator |
2025-09-19 00:28:16.684419 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-09-19 00:28:16.684431 | orchestrator | Friday 19 September 2025  00:27:12 +0000 (0:00:00.317)       0:04:14.495 ******
2025-09-19 00:28:16.684444 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:28:16.684456 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:28:16.684468 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:28:16.684479 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:28:16.684492 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:28:16.684504 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:28:16.684515 | orchestrator | ok: [testbed-manager]
2025-09-19 00:28:16.684526 | orchestrator |
2025-09-19 00:28:16.684536 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-09-19 00:28:16.684547 | orchestrator | Friday 19 September 2025  00:27:17 +0000 (0:00:05.889)       0:04:20.385 ******
2025-09-19 00:28:16.684560 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 00:28:16.684572 | orchestrator |
2025-09-19 00:28:16.684624 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-09-19 00:28:16.684636 | orchestrator | Friday 19 September 2025  00:27:18 +0000 (0:00:00.391)       0:04:20.776 ******
2025-09-19 00:28:16.684647 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-09-19 00:28:16.684658 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-09-19 00:28:16.684669 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-09-19 00:28:16.684702 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-09-19 00:28:16.684715 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:28:16.684726 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:28:16.684736 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-09-19 00:28:16.684747 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-09-19 00:28:16.684758 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-09-19 00:28:16.684769 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:28:16.684780 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-09-19 00:28:16.684791 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:28:16.684802 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-09-19 00:28:16.684812 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-09-19 00:28:16.684823 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-09-19 00:28:16.684834 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-09-19 00:28:16.684845 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:28:16.684872 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:28:16.684883 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-09-19 00:28:16.684894 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-09-19 00:28:16.684905 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:28:16.684916 | orchestrator |
2025-09-19 00:28:16.684926 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-09-19 00:28:16.684937 | orchestrator | Friday 19 September 2025  00:27:18 +0000 (0:00:00.360)       0:04:21.136 ******
2025-09-19 00:28:16.684949 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 00:28:16.684960 | orchestrator |
2025-09-19 00:28:16.684971 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-09-19 00:28:16.684982 | orchestrator | Friday 19 September 2025  00:27:19 +0000 (0:00:00.365)       0:04:21.501 ******
2025-09-19 00:28:16.684992 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-09-19 00:28:16.685003 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:28:16.685014 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-09-19 00:28:16.685024 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-09-19 00:28:16.685035 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:28:16.685046 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-09-19 00:28:16.685056 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:28:16.685067 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-09-19 00:28:16.685078 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:28:16.685088 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:28:16.685099 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-09-19 00:28:16.685110 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:28:16.685121 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-09-19 00:28:16.685131 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:28:16.685142 | orchestrator |
2025-09-19 00:28:16.685153 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-09-19 00:28:16.685164 | orchestrator | Friday 19 September 2025  00:27:19 +0000 (0:00:00.331)       0:04:21.833 ******
2025-09-19 00:28:16.685175 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 00:28:16.685186 | orchestrator |
2025-09-19 00:28:16.685197 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-09-19 00:28:16.685215 | orchestrator | Friday 19 September 2025  00:27:19 +0000 (0:00:00.508)       0:04:22.341 ******
2025-09-19 00:28:16.685226 | orchestrator | changed: [testbed-manager]
2025-09-19 00:28:16.685237 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:28:16.685248 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:28:16.685258 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:28:16.685269 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:28:16.685280 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:28:16.685291 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:28:16.685301 | orchestrator |
2025-09-19 00:28:16.685312 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-09-19 00:28:16.685323 | orchestrator | Friday 19 September 2025  00:27:53 +0000 (0:00:33.797)       0:04:56.139 ******
2025-09-19 00:28:16.685334 | orchestrator | changed: [testbed-manager]
2025-09-19 00:28:16.685344 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:28:16.685355 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:28:16.685366 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:28:16.685376 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:28:16.685387 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:28:16.685398 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:28:16.685408 | orchestrator |
2025-09-19 00:28:16.685419 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-09-19 00:28:16.685430 | orchestrator | Friday 19 September 2025  00:28:01 +0000 (0:00:07.832)       0:05:03.972 ******
2025-09-19 00:28:16.685441 | orchestrator | changed: [testbed-manager]
2025-09-19 00:28:16.685451 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:28:16.685462 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:28:16.685473 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:28:16.685483 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:28:16.685494 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:28:16.685505 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:28:16.685515 | orchestrator |
2025-09-19 00:28:16.685526 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-09-19 00:28:16.685537 | orchestrator | Friday 19 September 2025  00:28:08 +0000 (0:00:07.455)       0:05:11.428 ******
2025-09-19 00:28:16.685548 | orchestrator | ok: [testbed-manager]
2025-09-19 00:28:16.685559 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:28:16.685570 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:28:16.685598 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:28:16.685609 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:28:16.685619 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:28:16.685630 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:28:16.685641 | orchestrator |
2025-09-19 00:28:16.685652 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-09-19 00:28:16.685663 | orchestrator | Friday 19 September 2025  00:28:10 +0000 (0:00:01.716)       0:05:13.144 ******
2025-09-19 00:28:16.685674 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:28:16.685685 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:28:16.685696 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:28:16.685707 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:28:16.685717 | orchestrator | changed: [testbed-manager]
2025-09-19 00:28:16.685728 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:28:16.685739 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:28:16.685750 | orchestrator |
2025-09-19 00:28:16.685761 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-09-19 00:28:16.685778 | orchestrator | Friday 19 September 2025  00:28:16 +0000 (0:00:05.955)       0:05:19.100 ******
2025-09-19 00:28:28.021054 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 00:28:28.021169 | orchestrator |
2025-09-19 00:28:28.021187 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-09-19 00:28:28.021225 | orchestrator | Friday 19 September 2025  00:28:17 +0000 (0:00:00.436)       0:05:19.536 ******
2025-09-19
00:28:28.021238 | orchestrator | changed: [testbed-manager] 2025-09-19 00:28:28.021250 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:28:28.021261 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:28:28.021272 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:28:28.021282 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:28:28.021293 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:28:28.021304 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:28:28.021315 | orchestrator | 2025-09-19 00:28:28.021326 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-09-19 00:28:28.021337 | orchestrator | Friday 19 September 2025 00:28:17 +0000 (0:00:00.799) 0:05:20.335 ****** 2025-09-19 00:28:28.021348 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:28:28.021360 | orchestrator | ok: [testbed-manager] 2025-09-19 00:28:28.021371 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:28:28.021382 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:28:28.021393 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:28:28.021404 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:28:28.021414 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:28:28.021425 | orchestrator | 2025-09-19 00:28:28.021436 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-09-19 00:28:28.021447 | orchestrator | Friday 19 September 2025 00:28:19 +0000 (0:00:01.832) 0:05:22.168 ****** 2025-09-19 00:28:28.021458 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:28:28.021469 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:28:28.021480 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:28:28.021491 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:28:28.021502 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:28:28.021512 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:28:28.021523 | orchestrator | changed: [testbed-manager] 
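The osism.commons.timezone tasks above install tzdata and switch every testbed host to UTC. As a minimal sketch of the resulting state on a Debian-family host (assuming the role relies on the standard tzdata mechanism; the exact module it uses is not visible in this log):

```
# /etc/timezone — plain-text zone name read by Debian tooling
Etc/UTC

# /etc/localtime — symlink into the zoneinfo database, e.g.:
#   /etc/localtime -> /usr/share/zoneinfo/Etc/UTC
```

Running `timedatectl` (or `date -u`) on a node after the play confirms the active zone.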
2025-09-19 00:28:28.021534 | orchestrator |
2025-09-19 00:28:28.021554 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-09-19 00:28:28.021571 | orchestrator | Friday 19 September 2025 00:28:20 +0000 (0:00:00.756) 0:05:22.924 ******
2025-09-19 00:28:28.021618 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:28:28.021638 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:28:28.021658 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:28:28.021676 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:28:28.021691 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:28:28.021702 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:28:28.021713 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:28:28.021724 | orchestrator |
2025-09-19 00:28:28.021736 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-09-19 00:28:28.021746 | orchestrator | Friday 19 September 2025 00:28:20 +0000 (0:00:00.315) 0:05:23.240 ******
2025-09-19 00:28:28.021757 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:28:28.021768 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:28:28.021779 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:28:28.021789 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:28:28.021800 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:28:28.021811 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:28:28.021821 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:28:28.021832 | orchestrator |
2025-09-19 00:28:28.021843 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-09-19 00:28:28.021853 | orchestrator | Friday 19 September 2025 00:28:21 +0000 (0:00:00.380) 0:05:23.621 ******
2025-09-19 00:28:28.021864 | orchestrator | ok: [testbed-manager]
2025-09-19 00:28:28.021875 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:28:28.021886 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:28:28.021897 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:28:28.021907 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:28:28.021918 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:28:28.021928 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:28:28.021949 | orchestrator |
2025-09-19 00:28:28.021979 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-09-19 00:28:28.021990 | orchestrator | Friday 19 September 2025 00:28:21 +0000 (0:00:00.331) 0:05:23.952 ******
2025-09-19 00:28:28.022001 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:28:28.022012 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:28:28.022076 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:28:28.022087 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:28:28.022098 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:28:28.022109 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:28:28.022119 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:28:28.022130 | orchestrator |
2025-09-19 00:28:28.022141 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-09-19 00:28:28.022159 | orchestrator | Friday 19 September 2025 00:28:21 +0000 (0:00:00.274) 0:05:24.226 ******
2025-09-19 00:28:28.022181 | orchestrator | ok: [testbed-manager]
2025-09-19 00:28:28.022192 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:28:28.022203 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:28:28.022214 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:28:28.022225 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:28:28.022235 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:28:28.022246 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:28:28.022257 | orchestrator |
2025-09-19 00:28:28.022269 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-09-19 00:28:28.022280 | orchestrator | Friday 19 September 2025 00:28:22 +0000 (0:00:00.326) 0:05:24.553 ******
2025-09-19 00:28:28.022291 | orchestrator | ok: [testbed-manager] =>
2025-09-19 00:28:28.022302 | orchestrator |  docker_version: 5:27.5.1
2025-09-19 00:28:28.022313 | orchestrator | ok: [testbed-node-0] =>
2025-09-19 00:28:28.022324 | orchestrator |  docker_version: 5:27.5.1
2025-09-19 00:28:28.022334 | orchestrator | ok: [testbed-node-1] =>
2025-09-19 00:28:28.022345 | orchestrator |  docker_version: 5:27.5.1
2025-09-19 00:28:28.022356 | orchestrator | ok: [testbed-node-2] =>
2025-09-19 00:28:28.022367 | orchestrator |  docker_version: 5:27.5.1
2025-09-19 00:28:28.022378 | orchestrator | ok: [testbed-node-3] =>
2025-09-19 00:28:28.022389 | orchestrator |  docker_version: 5:27.5.1
2025-09-19 00:28:28.022419 | orchestrator | ok: [testbed-node-4] =>
2025-09-19 00:28:28.022431 | orchestrator |  docker_version: 5:27.5.1
2025-09-19 00:28:28.022442 | orchestrator | ok: [testbed-node-5] =>
2025-09-19 00:28:28.022453 | orchestrator |  docker_version: 5:27.5.1
2025-09-19 00:28:28.022464 | orchestrator |
2025-09-19 00:28:28.022475 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-09-19 00:28:28.022486 | orchestrator | Friday 19 September 2025 00:28:22 +0000 (0:00:00.283) 0:05:24.837 ******
2025-09-19 00:28:28.022497 | orchestrator | ok: [testbed-manager] =>
2025-09-19 00:28:28.022508 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 00:28:28.022518 | orchestrator | ok: [testbed-node-0] =>
2025-09-19 00:28:28.022529 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 00:28:28.022540 | orchestrator | ok: [testbed-node-1] =>
2025-09-19 00:28:28.022551 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 00:28:28.022562 | orchestrator | ok: [testbed-node-2] =>
2025-09-19 00:28:28.022593 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 00:28:28.022613 | orchestrator | ok: [testbed-node-3] =>
2025-09-19 00:28:28.022633 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 00:28:28.022651 | orchestrator | ok: [testbed-node-4] =>
2025-09-19 00:28:28.022667 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 00:28:28.022678 | orchestrator | ok: [testbed-node-5] =>
2025-09-19 00:28:28.022689 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 00:28:28.022700 | orchestrator |
2025-09-19 00:28:28.022711 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-09-19 00:28:28.022721 | orchestrator | Friday 19 September 2025 00:28:22 +0000 (0:00:00.413) 0:05:25.250 ******
2025-09-19 00:28:28.022732 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:28:28.022752 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:28:28.022763 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:28:28.022774 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:28:28.022785 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:28:28.022796 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:28:28.022806 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:28:28.022817 | orchestrator |
2025-09-19 00:28:28.022828 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-09-19 00:28:28.022839 | orchestrator | Friday 19 September 2025 00:28:23 +0000 (0:00:00.256) 0:05:25.506 ******
2025-09-19 00:28:28.022850 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:28:28.022861 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:28:28.022872 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:28:28.022883 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:28:28.022894 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:28:28.022904 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:28:28.022915 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:28:28.022926 | orchestrator |
2025-09-19 00:28:28.022937 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-09-19 00:28:28.022948 | orchestrator | Friday 19 September 2025 00:28:23 +0000 (0:00:00.262) 0:05:25.769 ******
2025-09-19 00:28:28.022961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 00:28:28.022974 | orchestrator |
2025-09-19 00:28:28.022985 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-09-19 00:28:28.022996 | orchestrator | Friday 19 September 2025 00:28:23 +0000 (0:00:00.420) 0:05:26.189 ******
2025-09-19 00:28:28.023007 | orchestrator | ok: [testbed-manager]
2025-09-19 00:28:28.023018 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:28:28.023029 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:28:28.023040 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:28:28.023050 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:28:28.023063 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:28:28.023083 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:28:28.023100 | orchestrator |
2025-09-19 00:28:28.023118 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-09-19 00:28:28.023136 | orchestrator | Friday 19 September 2025 00:28:24 +0000 (0:00:00.817) 0:05:27.007 ******
2025-09-19 00:28:28.023152 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:28:28.023169 | orchestrator | ok: [testbed-manager]
2025-09-19 00:28:28.023186 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:28:28.023204 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:28:28.023223 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:28:28.023241 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:28:28.023261 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:28:28.023279 | orchestrator |
2025-09-19 00:28:28.023294 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-09-19 00:28:28.023307 | orchestrator | Friday 19 September 2025 00:28:27 +0000 (0:00:02.863) 0:05:29.870 ******
2025-09-19 00:28:28.023318 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-09-19 00:28:28.023329 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-09-19 00:28:28.023340 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-09-19 00:28:28.023357 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-09-19 00:28:28.023368 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-09-19 00:28:28.023379 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-09-19 00:28:28.023390 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:28:28.023400 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-09-19 00:28:28.023411 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-09-19 00:28:28.023422 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:28:28.023441 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-09-19 00:28:28.023452 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-09-19 00:28:28.023462 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-09-19 00:28:28.023475 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-09-19 00:28:28.023495 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:28:28.023513 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-09-19 00:28:28.023531 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-09-19 00:28:28.023559 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-09-19 00:29:29.131835 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:29:29.131948 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-09-19 00:29:29.131965 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-09-19 00:29:29.131977 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-09-19 00:29:29.131988 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:29:29.132000 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:29:29.132011 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-09-19 00:29:29.132022 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-09-19 00:29:29.132033 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-09-19 00:29:29.132044 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:29:29.132056 | orchestrator |
2025-09-19 00:29:29.132068 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-09-19 00:29:29.132080 | orchestrator | Friday 19 September 2025 00:28:28 +0000 (0:00:00.716) 0:05:30.586 ******
2025-09-19 00:29:29.132091 | orchestrator | ok: [testbed-manager]
2025-09-19 00:29:29.132102 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:29:29.132114 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:29:29.132125 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:29:29.132135 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:29:29.132146 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:29:29.132157 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:29:29.132168 | orchestrator |
2025-09-19 00:29:29.132179 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-09-19 00:29:29.132190 | orchestrator | Friday 19 September 2025 00:28:35 +0000 (0:00:06.935) 0:05:37.522 ******
2025-09-19 00:29:29.132201 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:29:29.132212 | orchestrator | ok: [testbed-manager]
2025-09-19 00:29:29.132223 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:29:29.132234 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:29:29.132245 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:29:29.132256 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:29:29.132267 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:29:29.132278 | orchestrator |
2025-09-19 00:29:29.132289 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-09-19 00:29:29.132300 | orchestrator | Friday 19 September 2025 00:28:36 +0000 (0:00:01.116) 0:05:38.638 ******
2025-09-19 00:29:29.132312 | orchestrator | ok: [testbed-manager]
2025-09-19 00:29:29.132322 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:29:29.132333 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:29:29.132344 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:29:29.132355 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:29:29.132366 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:29:29.132377 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:29:29.132387 | orchestrator |
2025-09-19 00:29:29.132398 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-09-19 00:29:29.132409 | orchestrator | Friday 19 September 2025 00:28:44 +0000 (0:00:08.030) 0:05:46.669 ******
2025-09-19 00:29:29.132420 | orchestrator | changed: [testbed-manager]
2025-09-19 00:29:29.132431 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:29:29.132442 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:29:29.132476 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:29:29.132488 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:29:29.132499 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:29:29.132509 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:29:29.132520 | orchestrator |
2025-09-19 00:29:29.132531 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-09-19 00:29:29.132542 | orchestrator | Friday 19 September 2025 00:28:47 +0000 (0:00:03.250) 0:05:49.920 ******
2025-09-19 00:29:29.132603 | orchestrator | ok: [testbed-manager]
2025-09-19 00:29:29.132615 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:29:29.132625 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:29:29.132636 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:29:29.132647 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:29:29.132658 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:29:29.132668 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:29:29.132679 | orchestrator |
2025-09-19 00:29:29.132691 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-09-19 00:29:29.132702 | orchestrator | Friday 19 September 2025 00:28:49 +0000 (0:00:01.552) 0:05:51.473 ******
2025-09-19 00:29:29.132713 | orchestrator | ok: [testbed-manager]
2025-09-19 00:29:29.132724 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:29:29.132735 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:29:29.132746 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:29:29.132756 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:29:29.132767 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:29:29.132778 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:29:29.132789 | orchestrator |
2025-09-19 00:29:29.132800 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-09-19 00:29:29.132810 | orchestrator | Friday 19 September 2025 00:28:50 +0000 (0:00:01.377) 0:05:52.850 ******
2025-09-19 00:29:29.132821 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:29:29.132832 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:29:29.132856 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:29:29.132868 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:29:29.132879 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:29:29.132889 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:29:29.132900 | orchestrator | changed: [testbed-manager]
2025-09-19 00:29:29.132911 | orchestrator |
2025-09-19 00:29:29.132922 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-09-19 00:29:29.132933 | orchestrator | Friday 19 September 2025 00:28:50 +0000 (0:00:00.568) 0:05:53.419 ******
2025-09-19 00:29:29.132944 | orchestrator | ok: [testbed-manager]
2025-09-19 00:29:29.132955 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:29:29.132965 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:29:29.132976 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:29:29.132987 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:29:29.132997 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:29:29.133008 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:29:29.133019 | orchestrator |
2025-09-19 00:29:29.133030 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-09-19 00:29:29.133041 | orchestrator | Friday 19 September 2025 00:29:00 +0000 (0:00:09.936) 0:06:03.355 ******
2025-09-19 00:29:29.133052 | orchestrator | changed: [testbed-manager]
2025-09-19 00:29:29.133080 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:29:29.133092 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:29:29.133103 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:29:29.133114 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:29:29.133124 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:29:29.133135 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:29:29.133146 | orchestrator |
2025-09-19 00:29:29.133157 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-09-19 00:29:29.133168 | orchestrator | Friday 19 September 2025 00:29:01 +0000 (0:00:00.911) 0:06:04.267 ******
2025-09-19 00:29:29.133188 | orchestrator | ok: [testbed-manager]
2025-09-19 00:29:29.133199 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:29:29.133210 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:29:29.133220 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:29:29.133231 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:29:29.133242 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:29:29.133253 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:29:29.133264 | orchestrator |
2025-09-19 00:29:29.133275 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-09-19 00:29:29.133286 | orchestrator | Friday 19 September 2025 00:29:11 +0000 (0:00:09.387) 0:06:13.654 ******
2025-09-19 00:29:29.133297 | orchestrator | ok: [testbed-manager]
2025-09-19 00:29:29.133308 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:29:29.133318 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:29:29.133329 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:29:29.133340 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:29:29.133351 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:29:29.133362 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:29:29.133372 | orchestrator |
2025-09-19 00:29:29.133383 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-09-19 00:29:29.133394 | orchestrator | Friday 19 September 2025 00:29:22 +0000 (0:00:11.193) 0:06:24.848 ******
2025-09-19 00:29:29.133406 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-09-19 00:29:29.133417 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-09-19 00:29:29.133427 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-09-19 00:29:29.133438 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-09-19 00:29:29.133449 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-09-19 00:29:29.133460 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-09-19 00:29:29.133471 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-09-19 00:29:29.133481 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-09-19 00:29:29.133492 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-09-19 00:29:29.133503 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-09-19 00:29:29.133514 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-09-19 00:29:29.133524 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-09-19 00:29:29.133535 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-09-19 00:29:29.133566 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-09-19 00:29:29.133579 | orchestrator |
2025-09-19 00:29:29.133590 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-09-19 00:29:29.133601 | orchestrator | Friday 19 September 2025 00:29:23 +0000 (0:00:01.184) 0:06:26.032 ******
2025-09-19 00:29:29.133611 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:29:29.133622 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:29:29.133633 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:29:29.133644 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:29:29.133655 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:29:29.133666 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:29:29.133676 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:29:29.133687 | orchestrator |
2025-09-19 00:29:29.133698 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-09-19 00:29:29.133709 | orchestrator | Friday 19 September 2025 00:29:24 +0000 (0:00:00.600) 0:06:26.632 ******
2025-09-19 00:29:29.133720 | orchestrator | ok: [testbed-manager]
2025-09-19 00:29:29.133731 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:29:29.133742 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:29:29.133753 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:29:29.133764 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:29:29.133774 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:29:29.133785 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:29:29.133796 | orchestrator |
2025-09-19 00:29:29.133807 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-09-19 00:29:29.133827 | orchestrator | Friday 19 September 2025 00:29:28 +0000 (0:00:04.121) 0:06:30.754 ******
2025-09-19 00:29:29.133840 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:29:29.133858 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:29:29.133877 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:29:29.133894 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:29:29.133912 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:29:29.133929 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:29:29.133954 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:29:29.133973 | orchestrator |
2025-09-19 00:29:29.133991 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-09-19 00:29:29.134010 | orchestrator | Friday 19 September 2025 00:29:28 +0000 (0:00:00.512) 0:06:31.266 ******
2025-09-19 00:29:29.134096 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-09-19 00:29:29.134108 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-09-19 00:29:29.134119 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:29:29.134130 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-09-19 00:29:29.134174 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-09-19 00:29:29.134188 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:29:29.134199 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-09-19 00:29:29.134210 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-09-19 00:29:29.134220 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:29:29.134231 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-09-19 00:29:29.134253 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-09-19 00:29:48.170725 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:29:48.170822 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-09-19 00:29:48.170834 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-09-19 00:29:48.170844 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:29:48.170853 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-09-19 00:29:48.170862 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-09-19 00:29:48.170870 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:29:48.170879 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-09-19 00:29:48.170888 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-09-19 00:29:48.170896 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:29:48.170906 | orchestrator |
2025-09-19 00:29:48.170915 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-09-19 00:29:48.170925 | orchestrator | Friday 19 September 2025 00:29:29 +0000 (0:00:00.552) 0:06:31.819 ******
2025-09-19 00:29:48.170934 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:29:48.170943 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:29:48.170951 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:29:48.170960 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:29:48.170969 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:29:48.170977 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:29:48.170986 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:29:48.170995 | orchestrator |
2025-09-19 00:29:48.171004 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-09-19 00:29:48.171013 | orchestrator | Friday 19 September 2025 00:29:29 +0000 (0:00:00.505) 0:06:32.324 ******
2025-09-19 00:29:48.171022 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:29:48.171031 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:29:48.171039 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:29:48.171048 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:29:48.171057 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:29:48.171065 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:29:48.171094 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:29:48.171104 | orchestrator |
2025-09-19 00:29:48.171112 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-09-19 00:29:48.171121 | orchestrator | Friday 19 September 2025 00:29:30 +0000 (0:00:00.530) 0:06:32.855 ******
2025-09-19 00:29:48.171130 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:29:48.171138 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:29:48.171147 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:29:48.171155 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:29:48.171164 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:29:48.171172 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:29:48.171181 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:29:48.171189 | orchestrator |
2025-09-19 00:29:48.171198 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-09-19 00:29:48.171207 | orchestrator | Friday 19 September 2025 00:29:31 +0000 (0:00:00.696) 0:06:33.551 ******
2025-09-19 00:29:48.171216 | orchestrator | ok: [testbed-manager]
2025-09-19 00:29:48.171225 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:29:48.171233 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:29:48.171242 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:29:48.171250 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:29:48.171259 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:29:48.171268 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:29:48.171278 | orchestrator |
2025-09-19 00:29:48.171289 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-09-19 00:29:48.171299 | orchestrator | Friday 19 September 2025 00:29:32 +0000 (0:00:01.704) 0:06:35.256 ******
2025-09-19 00:29:48.171309 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 00:29:48.171321 | orchestrator |
2025-09-19 00:29:48.171331 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-09-19 00:29:48.171341 | orchestrator | Friday 19 September 2025 00:29:33 +0000 (0:00:00.838) 0:06:36.095 ******
2025-09-19 00:29:48.171352 | orchestrator | ok: [testbed-manager]
2025-09-19 00:29:48.171363 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:29:48.171373 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:29:48.171382 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:29:48.171392 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:29:48.171402 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:29:48.171411 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:29:48.171420 | orchestrator |
2025-09-19 00:29:48.171431 |
orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-09-19 00:29:48.171441 | orchestrator | Friday 19 September 2025 00:29:34 +0000 (0:00:00.803) 0:06:36.898 ****** 2025-09-19 00:29:48.171451 | orchestrator | ok: [testbed-manager] 2025-09-19 00:29:48.171461 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:29:48.171471 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:29:48.171479 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:29:48.171488 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:29:48.171496 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:29:48.171505 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:29:48.171514 | orchestrator | 2025-09-19 00:29:48.171522 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-09-19 00:29:48.171531 | orchestrator | Friday 19 September 2025 00:29:35 +0000 (0:00:01.103) 0:06:38.002 ****** 2025-09-19 00:29:48.171563 | orchestrator | ok: [testbed-manager] 2025-09-19 00:29:48.171572 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:29:48.171580 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:29:48.171589 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:29:48.171597 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:29:48.171606 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:29:48.171615 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:29:48.171630 | orchestrator | 2025-09-19 00:29:48.171639 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-09-19 00:29:48.171648 | orchestrator | Friday 19 September 2025 00:29:36 +0000 (0:00:01.313) 0:06:39.316 ****** 2025-09-19 00:29:48.171670 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:29:48.171679 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:29:48.171687 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:29:48.171696 | 
orchestrator | ok: [testbed-node-2] 2025-09-19 00:29:48.171705 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:29:48.171713 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:29:48.171722 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:29:48.171730 | orchestrator | 2025-09-19 00:29:48.171739 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-09-19 00:29:48.171748 | orchestrator | Friday 19 September 2025 00:29:38 +0000 (0:00:01.346) 0:06:40.662 ****** 2025-09-19 00:29:48.171757 | orchestrator | ok: [testbed-manager] 2025-09-19 00:29:48.171765 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:29:48.171774 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:29:48.171783 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:29:48.171791 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:29:48.171800 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:29:48.171808 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:29:48.171817 | orchestrator | 2025-09-19 00:29:48.171826 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-09-19 00:29:48.171834 | orchestrator | Friday 19 September 2025 00:29:39 +0000 (0:00:01.286) 0:06:41.949 ****** 2025-09-19 00:29:48.171843 | orchestrator | changed: [testbed-manager] 2025-09-19 00:29:48.171851 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:29:48.171860 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:29:48.171869 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:29:48.171877 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:29:48.171886 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:29:48.171894 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:29:48.171903 | orchestrator | 2025-09-19 00:29:48.171911 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-09-19 00:29:48.171920 | orchestrator | Friday 
19 September 2025 00:29:41 +0000 (0:00:01.581) 0:06:43.530 ****** 2025-09-19 00:29:48.171946 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:29:48.171955 | orchestrator | 2025-09-19 00:29:48.171964 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-09-19 00:29:48.171973 | orchestrator | Friday 19 September 2025 00:29:41 +0000 (0:00:00.884) 0:06:44.414 ****** 2025-09-19 00:29:48.171982 | orchestrator | ok: [testbed-manager] 2025-09-19 00:29:48.171990 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:29:48.171999 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:29:48.172007 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:29:48.172016 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:29:48.172025 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:29:48.172033 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:29:48.172041 | orchestrator | 2025-09-19 00:29:48.172050 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-09-19 00:29:48.172059 | orchestrator | Friday 19 September 2025 00:29:43 +0000 (0:00:01.362) 0:06:45.777 ****** 2025-09-19 00:29:48.172067 | orchestrator | ok: [testbed-manager] 2025-09-19 00:29:48.172076 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:29:48.172085 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:29:48.172093 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:29:48.172102 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:29:48.172110 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:29:48.172118 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:29:48.172127 | orchestrator | 2025-09-19 00:29:48.172136 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-09-19 
00:29:48.172151 | orchestrator | Friday 19 September 2025 00:29:44 +0000 (0:00:01.115) 0:06:46.893 ****** 2025-09-19 00:29:48.172160 | orchestrator | ok: [testbed-manager] 2025-09-19 00:29:48.172168 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:29:48.172177 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:29:48.172186 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:29:48.172194 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:29:48.172203 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:29:48.172211 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:29:48.172220 | orchestrator | 2025-09-19 00:29:48.172228 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-09-19 00:29:48.172237 | orchestrator | Friday 19 September 2025 00:29:45 +0000 (0:00:01.367) 0:06:48.260 ****** 2025-09-19 00:29:48.172245 | orchestrator | ok: [testbed-manager] 2025-09-19 00:29:48.172254 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:29:48.172262 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:29:48.172271 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:29:48.172279 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:29:48.172288 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:29:48.172296 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:29:48.172305 | orchestrator | 2025-09-19 00:29:48.172313 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-09-19 00:29:48.172322 | orchestrator | Friday 19 September 2025 00:29:46 +0000 (0:00:01.127) 0:06:49.387 ****** 2025-09-19 00:29:48.172335 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:29:48.172344 | orchestrator | 2025-09-19 00:29:48.172353 | orchestrator | TASK [osism.services.docker : Flush handlers] 
********************************** 2025-09-19 00:29:48.172362 | orchestrator | Friday 19 September 2025 00:29:47 +0000 (0:00:00.904) 0:06:50.291 ****** 2025-09-19 00:29:48.172370 | orchestrator | 2025-09-19 00:29:48.172379 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-19 00:29:48.172388 | orchestrator | Friday 19 September 2025 00:29:47 +0000 (0:00:00.040) 0:06:50.332 ****** 2025-09-19 00:29:48.172396 | orchestrator | 2025-09-19 00:29:48.172405 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-19 00:29:48.172413 | orchestrator | Friday 19 September 2025 00:29:47 +0000 (0:00:00.047) 0:06:50.380 ****** 2025-09-19 00:29:48.172422 | orchestrator | 2025-09-19 00:29:48.172431 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-19 00:29:48.172440 | orchestrator | Friday 19 September 2025 00:29:47 +0000 (0:00:00.038) 0:06:50.418 ****** 2025-09-19 00:29:48.172448 | orchestrator | 2025-09-19 00:29:48.172462 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-19 00:30:14.611451 | orchestrator | Friday 19 September 2025 00:29:48 +0000 (0:00:00.038) 0:06:50.457 ****** 2025-09-19 00:30:14.611636 | orchestrator | 2025-09-19 00:30:14.611657 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-19 00:30:14.611669 | orchestrator | Friday 19 September 2025 00:29:48 +0000 (0:00:00.046) 0:06:50.503 ****** 2025-09-19 00:30:14.611680 | orchestrator | 2025-09-19 00:30:14.611691 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-19 00:30:14.611702 | orchestrator | Friday 19 September 2025 00:29:48 +0000 (0:00:00.039) 0:06:50.543 ****** 2025-09-19 00:30:14.611713 | orchestrator | 2025-09-19 00:30:14.611724 | orchestrator | RUNNING HANDLER [osism.commons.repository : 
Force update of package cache] ***** 2025-09-19 00:30:14.611736 | orchestrator | Friday 19 September 2025 00:29:48 +0000 (0:00:00.039) 0:06:50.583 ****** 2025-09-19 00:30:14.611747 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:30:14.611759 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:30:14.611770 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:30:14.611781 | orchestrator | 2025-09-19 00:30:14.611792 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-09-19 00:30:14.611831 | orchestrator | Friday 19 September 2025 00:29:49 +0000 (0:00:01.192) 0:06:51.775 ****** 2025-09-19 00:30:14.611843 | orchestrator | changed: [testbed-manager] 2025-09-19 00:30:14.611854 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:30:14.611865 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:30:14.611876 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:30:14.611887 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:30:14.611898 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:30:14.611909 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:30:14.611920 | orchestrator | 2025-09-19 00:30:14.611930 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-09-19 00:30:14.611941 | orchestrator | Friday 19 September 2025 00:29:50 +0000 (0:00:01.459) 0:06:53.234 ****** 2025-09-19 00:30:14.611952 | orchestrator | changed: [testbed-manager] 2025-09-19 00:30:14.611963 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:30:14.611974 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:30:14.611985 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:30:14.611996 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:30:14.612007 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:30:14.612020 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:30:14.612032 | orchestrator | 2025-09-19 00:30:14.612044 | orchestrator | RUNNING 
HANDLER [osism.services.docker : Restart docker service] *************** 2025-09-19 00:30:14.612056 | orchestrator | Friday 19 September 2025 00:29:51 +0000 (0:00:01.088) 0:06:54.323 ****** 2025-09-19 00:30:14.612069 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:30:14.612081 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:30:14.612094 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:30:14.612107 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:30:14.612119 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:30:14.612132 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:30:14.612144 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:30:14.612156 | orchestrator | 2025-09-19 00:30:14.612168 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-09-19 00:30:14.612181 | orchestrator | Friday 19 September 2025 00:29:54 +0000 (0:00:02.386) 0:06:56.709 ****** 2025-09-19 00:30:14.612193 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:30:14.612205 | orchestrator | 2025-09-19 00:30:14.612217 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-09-19 00:30:14.612229 | orchestrator | Friday 19 September 2025 00:29:54 +0000 (0:00:00.094) 0:06:56.804 ****** 2025-09-19 00:30:14.612241 | orchestrator | ok: [testbed-manager] 2025-09-19 00:30:14.612254 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:30:14.612266 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:30:14.612278 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:30:14.612290 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:30:14.612302 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:30:14.612314 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:30:14.612327 | orchestrator | 2025-09-19 00:30:14.612339 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-09-19 
00:30:14.612351 | orchestrator | Friday 19 September 2025 00:29:55 +0000 (0:00:01.018) 0:06:57.823 ****** 2025-09-19 00:30:14.612364 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:30:14.612377 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:30:14.612388 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:30:14.612398 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:30:14.612409 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:30:14.612420 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:30:14.612431 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:30:14.612442 | orchestrator | 2025-09-19 00:30:14.612452 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-09-19 00:30:14.612463 | orchestrator | Friday 19 September 2025 00:29:56 +0000 (0:00:00.725) 0:06:58.548 ****** 2025-09-19 00:30:14.612487 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:30:14.612509 | orchestrator | 2025-09-19 00:30:14.612520 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-09-19 00:30:14.612551 | orchestrator | Friday 19 September 2025 00:29:56 +0000 (0:00:00.870) 0:06:59.419 ****** 2025-09-19 00:30:14.612562 | orchestrator | ok: [testbed-manager] 2025-09-19 00:30:14.612573 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:30:14.612584 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:30:14.612594 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:30:14.612605 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:30:14.612616 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:30:14.612627 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:30:14.612637 | orchestrator | 2025-09-19 00:30:14.612648 | orchestrator | TASK 
[osism.services.docker : Copy docker fact files] ************************** 2025-09-19 00:30:14.612659 | orchestrator | Friday 19 September 2025 00:29:57 +0000 (0:00:00.818) 0:07:00.238 ****** 2025-09-19 00:30:14.612670 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-09-19 00:30:14.612682 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-09-19 00:30:14.612710 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-09-19 00:30:14.612721 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-09-19 00:30:14.612733 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-09-19 00:30:14.612743 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-09-19 00:30:14.612754 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-09-19 00:30:14.612765 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-09-19 00:30:14.612776 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-09-19 00:30:14.612787 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-09-19 00:30:14.612798 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-09-19 00:30:14.612809 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-09-19 00:30:14.612819 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-09-19 00:30:14.612830 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-09-19 00:30:14.612841 | orchestrator | 2025-09-19 00:30:14.612852 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-09-19 00:30:14.612862 | orchestrator | Friday 19 September 2025 00:30:00 +0000 (0:00:02.736) 0:07:02.974 ****** 2025-09-19 00:30:14.612874 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:30:14.612884 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:30:14.612895 | orchestrator 
| skipping: [testbed-node-1] 2025-09-19 00:30:14.612906 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:30:14.612917 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:30:14.612927 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:30:14.612938 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:30:14.612949 | orchestrator | 2025-09-19 00:30:14.612960 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-09-19 00:30:14.612971 | orchestrator | Friday 19 September 2025 00:30:01 +0000 (0:00:00.508) 0:07:03.482 ****** 2025-09-19 00:30:14.612984 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:30:14.612996 | orchestrator | 2025-09-19 00:30:14.613007 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-09-19 00:30:14.613018 | orchestrator | Friday 19 September 2025 00:30:01 +0000 (0:00:00.856) 0:07:04.339 ****** 2025-09-19 00:30:14.613029 | orchestrator | ok: [testbed-manager] 2025-09-19 00:30:14.613039 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:30:14.613050 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:30:14.613068 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:30:14.613078 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:30:14.613089 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:30:14.613100 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:30:14.613111 | orchestrator | 2025-09-19 00:30:14.613122 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-09-19 00:30:14.613133 | orchestrator | Friday 19 September 2025 00:30:02 +0000 (0:00:01.060) 0:07:05.399 ****** 2025-09-19 00:30:14.613143 | orchestrator | ok: [testbed-manager] 
2025-09-19 00:30:14.613154 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:30:14.613165 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:30:14.613176 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:30:14.613186 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:30:14.613197 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:30:14.613208 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:30:14.613218 | orchestrator | 
2025-09-19 00:30:14.613229 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-09-19 00:30:14.613240 | orchestrator | Friday 19 September 2025 00:30:03 +0000 (0:00:00.796) 0:07:06.196 ******
2025-09-19 00:30:14.613251 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:30:14.613262 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:30:14.613273 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:30:14.613284 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:30:14.613295 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:30:14.613306 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:30:14.613316 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:30:14.613327 | orchestrator | 
2025-09-19 00:30:14.613338 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-09-19 00:30:14.613349 | orchestrator | Friday 19 September 2025 00:30:04 +0000 (0:00:00.489) 0:07:06.686 ******
2025-09-19 00:30:14.613360 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:30:14.613370 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:30:14.613381 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:30:14.613392 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:30:14.613403 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:30:14.613413 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:30:14.613424 | orchestrator | ok: [testbed-manager]
2025-09-19 00:30:14.613435 | orchestrator | 
2025-09-19 00:30:14.613450 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-09-19 00:30:14.613461 | orchestrator | Friday 19 September 2025 00:30:06 +0000 (0:00:01.922) 0:07:08.609 ******
2025-09-19 00:30:14.613472 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:30:14.613483 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:30:14.613494 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:30:14.613505 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:30:14.613515 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:30:14.613539 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:30:14.613551 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:30:14.613561 | orchestrator | 
2025-09-19 00:30:14.613572 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-09-19 00:30:14.613583 | orchestrator | Friday 19 September 2025 00:30:06 +0000 (0:00:00.418) 0:07:09.028 ******
2025-09-19 00:30:14.613594 | orchestrator | ok: [testbed-manager]
2025-09-19 00:30:14.613604 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:30:14.613615 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:30:14.613626 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:30:14.613637 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:30:14.613648 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:30:14.613658 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:30:14.613669 | orchestrator | 
2025-09-19 00:30:14.613687 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-09-19 00:30:46.386874 | orchestrator | Friday 19 September 2025 00:30:14 +0000 (0:00:08.004) 0:07:17.032 ******
2025-09-19 00:30:46.386986 | orchestrator | ok: [testbed-manager]
2025-09-19 00:30:46.387003 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:30:46.387040 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:30:46.387052 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:30:46.387063 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:30:46.387074 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:30:46.387084 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:30:46.387095 | orchestrator | 
2025-09-19 00:30:46.387107 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-09-19 00:30:46.387118 | orchestrator | Friday 19 September 2025 00:30:15 +0000 (0:00:01.309) 0:07:18.342 ******
2025-09-19 00:30:46.387129 | orchestrator | ok: [testbed-manager]
2025-09-19 00:30:46.387140 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:30:46.387151 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:30:46.387162 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:30:46.387172 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:30:46.387183 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:30:46.387193 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:30:46.387204 | orchestrator | 
2025-09-19 00:30:46.387215 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-09-19 00:30:46.387226 | orchestrator | Friday 19 September 2025 00:30:17 +0000 (0:00:01.660) 0:07:20.003 ******
2025-09-19 00:30:46.387237 | orchestrator | ok: [testbed-manager]
2025-09-19 00:30:46.387248 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:30:46.387258 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:30:46.387269 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:30:46.387280 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:30:46.387290 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:30:46.387301 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:30:46.387311 | orchestrator | 
2025-09-19 00:30:46.387322 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-19 00:30:46.387333 | orchestrator | Friday 19 September 2025 00:30:19 +0000 (0:00:01.830) 0:07:21.833 ******
2025-09-19 00:30:46.387344 | orchestrator | ok: [testbed-manager]
2025-09-19 00:30:46.387355 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:30:46.387366 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:30:46.387377 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:30:46.387388 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:30:46.387399 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:30:46.387410 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:30:46.387422 | orchestrator | 
2025-09-19 00:30:46.387434 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-19 00:30:46.387447 | orchestrator | Friday 19 September 2025 00:30:20 +0000 (0:00:00.844) 0:07:22.678 ******
2025-09-19 00:30:46.387459 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:30:46.387471 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:30:46.387483 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:30:46.387496 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:30:46.387508 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:30:46.387542 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:30:46.387555 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:30:46.387567 | orchestrator | 
2025-09-19 00:30:46.387579 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-09-19 00:30:46.387592 | orchestrator | Friday 19 September 2025 00:30:21 +0000 (0:00:00.807) 0:07:23.485 ******
2025-09-19 00:30:46.387604 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:30:46.387616 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:30:46.387628 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:30:46.387641 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:30:46.387653 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:30:46.387665 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:30:46.387677 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:30:46.387689 | orchestrator | 
2025-09-19 00:30:46.387701 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-09-19 00:30:46.387713 | orchestrator | Friday 19 September 2025 00:30:21 +0000 (0:00:00.500) 0:07:23.986 ******
2025-09-19 00:30:46.387734 | orchestrator | ok: [testbed-manager]
2025-09-19 00:30:46.387747 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:30:46.387760 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:30:46.387772 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:30:46.387784 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:30:46.387796 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:30:46.387807 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:30:46.387817 | orchestrator | 
2025-09-19 00:30:46.387828 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-09-19 00:30:46.387839 | orchestrator | Friday 19 September 2025 00:30:22 +0000 (0:00:00.701) 0:07:24.687 ******
2025-09-19 00:30:46.387850 | orchestrator | ok: [testbed-manager]
2025-09-19 00:30:46.387860 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:30:46.387871 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:30:46.387882 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:30:46.387892 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:30:46.387903 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:30:46.387914 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:30:46.387924 | orchestrator | 
2025-09-19 00:30:46.387949 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-09-19 00:30:46.387960 | orchestrator | Friday 19 September 2025 00:30:22 +0000 (0:00:00.524) 0:07:25.212 ******
2025-09-19 00:30:46.387971 | orchestrator | ok: [testbed-manager]
2025-09-19 00:30:46.387982 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:30:46.387993 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:30:46.388003 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:30:46.388014 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:30:46.388025 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:30:46.388035 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:30:46.388046 | orchestrator | 
2025-09-19 00:30:46.388057 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-09-19 00:30:46.388068 | orchestrator | Friday 19 September 2025 00:30:23 +0000 (0:00:00.523) 0:07:25.736 ******
2025-09-19 00:30:46.388079 | orchestrator | ok: [testbed-manager]
2025-09-19 00:30:46.388090 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:30:46.388100 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:30:46.388111 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:30:46.388121 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:30:46.388132 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:30:46.388143 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:30:46.388153 | orchestrator | 
2025-09-19 00:30:46.388164 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-09-19 00:30:46.388193 | orchestrator | Friday 19 September 2025 00:30:28 +0000 (0:00:05.652) 0:07:31.388 ******
2025-09-19 00:30:46.388205 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:30:46.388216 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:30:46.388227 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:30:46.388237 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:30:46.388248 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:30:46.388259 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:30:46.388269 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:30:46.388280 | orchestrator | 
2025-09-19 00:30:46.388290 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-09-19 00:30:46.388301 | orchestrator | Friday 19 September 2025 00:30:29 +0000 (0:00:00.540) 0:07:31.929 ******
2025-09-19 00:30:46.388314 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 00:30:46.388327 | orchestrator | 
2025-09-19 00:30:46.388338 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-09-19 00:30:46.388349 | orchestrator | Friday 19 September 2025 00:30:30 +0000 (0:00:00.979) 0:07:32.908 ******
2025-09-19 00:30:46.388360 | orchestrator | ok: [testbed-manager]
2025-09-19 00:30:46.388370 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:30:46.388388 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:30:46.388399 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:30:46.388410 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:30:46.388420 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:30:46.388431 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:30:46.388442 | orchestrator | 
2025-09-19 00:30:46.388453 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-09-19 00:30:46.388464 | orchestrator | Friday 19 September 2025 00:30:32 +0000 (0:00:01.853) 0:07:34.762 ******
2025-09-19 00:30:46.388475 | orchestrator | ok: [testbed-manager]
2025-09-19 00:30:46.388485 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:30:46.388496 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:30:46.388507 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:30:46.388549 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:30:46.388560 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:30:46.388571 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:30:46.388581 | orchestrator
| 2025-09-19 00:30:46.388592 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-09-19 00:30:46.388603 | orchestrator | Friday 19 September 2025 00:30:33 +0000 (0:00:01.121) 0:07:35.883 ****** 2025-09-19 00:30:46.388614 | orchestrator | ok: [testbed-manager] 2025-09-19 00:30:46.388624 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:30:46.388635 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:30:46.388646 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:30:46.388656 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:30:46.388667 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:30:46.388677 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:30:46.388688 | orchestrator | 2025-09-19 00:30:46.388698 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-09-19 00:30:46.388709 | orchestrator | Friday 19 September 2025 00:30:34 +0000 (0:00:01.064) 0:07:36.948 ****** 2025-09-19 00:30:46.388720 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 00:30:46.388733 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 00:30:46.388744 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 00:30:46.388755 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 00:30:46.388765 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 00:30:46.388776 | orchestrator | changed: [testbed-node-4] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 00:30:46.388787 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 00:30:46.388797 | orchestrator | 2025-09-19 00:30:46.388808 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-09-19 00:30:46.388819 | orchestrator | Friday 19 September 2025 00:30:36 +0000 (0:00:01.738) 0:07:38.686 ****** 2025-09-19 00:30:46.388830 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:30:46.388841 | orchestrator | 2025-09-19 00:30:46.388852 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-09-19 00:30:46.388863 | orchestrator | Friday 19 September 2025 00:30:37 +0000 (0:00:00.792) 0:07:39.479 ****** 2025-09-19 00:30:46.388874 | orchestrator | changed: [testbed-manager] 2025-09-19 00:30:46.388885 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:30:46.388904 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:30:46.388915 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:30:46.388926 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:30:46.388936 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:30:46.388947 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:30:46.388958 | orchestrator | 2025-09-19 00:30:46.388969 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-09-19 00:30:46.388986 | orchestrator | Friday 19 September 2025 00:30:46 +0000 (0:00:09.320) 0:07:48.800 ****** 2025-09-19 00:31:02.461652 | orchestrator | ok: [testbed-manager] 
2025-09-19 00:31:02.461765 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:31:02.461781 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:31:02.461793 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:31:02.461804 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:31:02.461815 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:31:02.461826 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:31:02.461837 | orchestrator | 2025-09-19 00:31:02.461850 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-09-19 00:31:02.461863 | orchestrator | Friday 19 September 2025 00:30:48 +0000 (0:00:01.781) 0:07:50.582 ****** 2025-09-19 00:31:02.461874 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:31:02.461885 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:31:02.461896 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:31:02.461906 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:31:02.461917 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:31:02.461928 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:31:02.461939 | orchestrator | 2025-09-19 00:31:02.461950 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-09-19 00:31:02.461961 | orchestrator | Friday 19 September 2025 00:30:49 +0000 (0:00:01.304) 0:07:51.887 ****** 2025-09-19 00:31:02.461972 | orchestrator | changed: [testbed-manager] 2025-09-19 00:31:02.461984 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:31:02.461995 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:31:02.462006 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:31:02.462077 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:31:02.462092 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:31:02.462103 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:31:02.462114 | orchestrator | 2025-09-19 00:31:02.462125 | orchestrator | PLAY [Apply bootstrap role part 2] 
********************************************* 2025-09-19 00:31:02.462136 | orchestrator | 2025-09-19 00:31:02.462148 | orchestrator | TASK [Include hardening role] ************************************************** 2025-09-19 00:31:02.462159 | orchestrator | Friday 19 September 2025 00:30:50 +0000 (0:00:01.492) 0:07:53.379 ****** 2025-09-19 00:31:02.462170 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:31:02.462181 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:31:02.462193 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:31:02.462204 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:31:02.462215 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:31:02.462226 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:31:02.463095 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:31:02.463116 | orchestrator | 2025-09-19 00:31:02.463130 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-09-19 00:31:02.463142 | orchestrator | 2025-09-19 00:31:02.463155 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-09-19 00:31:02.463168 | orchestrator | Friday 19 September 2025 00:30:51 +0000 (0:00:00.527) 0:07:53.907 ****** 2025-09-19 00:31:02.463181 | orchestrator | changed: [testbed-manager] 2025-09-19 00:31:02.463194 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:31:02.463207 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:31:02.463220 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:31:02.463234 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:31:02.463247 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:31:02.463260 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:31:02.463273 | orchestrator | 2025-09-19 00:31:02.463310 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-09-19 00:31:02.463322 | orchestrator | Friday 19 
September 2025 00:30:52 +0000 (0:00:01.352) 0:07:55.260 ****** 2025-09-19 00:31:02.463333 | orchestrator | ok: [testbed-manager] 2025-09-19 00:31:02.463344 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:31:02.463354 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:31:02.463365 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:31:02.463376 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:31:02.463386 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:31:02.463444 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:31:02.463456 | orchestrator | 2025-09-19 00:31:02.463467 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-09-19 00:31:02.463479 | orchestrator | Friday 19 September 2025 00:30:54 +0000 (0:00:01.678) 0:07:56.938 ****** 2025-09-19 00:31:02.463489 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:31:02.463500 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:31:02.463529 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:31:02.463541 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:31:02.463551 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:31:02.463562 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:31:02.463573 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:31:02.463583 | orchestrator | 2025-09-19 00:31:02.463595 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-09-19 00:31:02.463605 | orchestrator | Friday 19 September 2025 00:30:55 +0000 (0:00:00.791) 0:07:57.730 ****** 2025-09-19 00:31:02.463616 | orchestrator | changed: [testbed-manager] 2025-09-19 00:31:02.463627 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:31:02.463638 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:31:02.463648 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:31:02.463659 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:31:02.463675 | orchestrator | changed: 
[testbed-node-4] 2025-09-19 00:31:02.463686 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:31:02.463697 | orchestrator | 2025-09-19 00:31:02.463707 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-09-19 00:31:02.463718 | orchestrator | 2025-09-19 00:31:02.463729 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-09-19 00:31:02.463740 | orchestrator | Friday 19 September 2025 00:30:56 +0000 (0:00:01.246) 0:07:58.976 ****** 2025-09-19 00:31:02.463751 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:31:02.463764 | orchestrator | 2025-09-19 00:31:02.463775 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-19 00:31:02.463786 | orchestrator | Friday 19 September 2025 00:30:57 +0000 (0:00:00.955) 0:07:59.932 ****** 2025-09-19 00:31:02.463796 | orchestrator | ok: [testbed-manager] 2025-09-19 00:31:02.463807 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:31:02.463818 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:31:02.463828 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:31:02.463839 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:31:02.463850 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:31:02.463860 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:31:02.463871 | orchestrator | 2025-09-19 00:31:02.463903 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-19 00:31:02.463914 | orchestrator | Friday 19 September 2025 00:30:58 +0000 (0:00:00.858) 0:08:00.790 ****** 2025-09-19 00:31:02.463925 | orchestrator | changed: [testbed-manager] 2025-09-19 00:31:02.463936 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:31:02.463946 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:31:02.463957 | 
orchestrator | changed: [testbed-node-2] 2025-09-19 00:31:02.463968 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:31:02.464039 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:31:02.464051 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:31:02.464062 | orchestrator | 2025-09-19 00:31:02.464084 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-09-19 00:31:02.464095 | orchestrator | Friday 19 September 2025 00:30:59 +0000 (0:00:01.125) 0:08:01.916 ****** 2025-09-19 00:31:02.464106 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:31:02.464117 | orchestrator | 2025-09-19 00:31:02.464128 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-19 00:31:02.464139 | orchestrator | Friday 19 September 2025 00:31:00 +0000 (0:00:00.988) 0:08:02.904 ****** 2025-09-19 00:31:02.464150 | orchestrator | ok: [testbed-manager] 2025-09-19 00:31:02.464161 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:31:02.464172 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:31:02.464183 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:31:02.464194 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:31:02.464205 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:31:02.464216 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:31:02.464227 | orchestrator | 2025-09-19 00:31:02.464238 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-19 00:31:02.464248 | orchestrator | Friday 19 September 2025 00:31:01 +0000 (0:00:00.847) 0:08:03.752 ****** 2025-09-19 00:31:02.464259 | orchestrator | changed: [testbed-manager] 2025-09-19 00:31:02.464270 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:31:02.464281 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:31:02.464292 | 
orchestrator | changed: [testbed-node-2] 2025-09-19 00:31:02.464302 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:31:02.464313 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:31:02.464324 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:31:02.464335 | orchestrator | 2025-09-19 00:31:02.464346 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:31:02.464358 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-09-19 00:31:02.464369 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-09-19 00:31:02.464381 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-19 00:31:02.464392 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-19 00:31:02.464403 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-19 00:31:02.464414 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-19 00:31:02.464425 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-19 00:31:02.464436 | orchestrator | 2025-09-19 00:31:02.464447 | orchestrator | 2025-09-19 00:31:02.464458 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 00:31:02.464469 | orchestrator | Friday 19 September 2025 00:31:02 +0000 (0:00:01.115) 0:08:04.868 ****** 2025-09-19 00:31:02.464480 | orchestrator | =============================================================================== 2025-09-19 00:31:02.464491 | orchestrator | osism.commons.packages : Install required packages --------------------- 75.00s 2025-09-19 00:31:02.464517 | orchestrator | 
osism.commons.packages : Download required packages -------------------- 41.78s 2025-09-19 00:31:02.464529 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.80s 2025-09-19 00:31:02.464540 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.93s 2025-09-19 00:31:02.464559 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.97s 2025-09-19 00:31:02.464571 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.85s 2025-09-19 00:31:02.464583 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.19s 2025-09-19 00:31:02.464594 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.94s 2025-09-19 00:31:02.464605 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.39s 2025-09-19 00:31:02.464615 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.32s 2025-09-19 00:31:02.464626 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.13s 2025-09-19 00:31:02.464637 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.64s 2025-09-19 00:31:02.464648 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.03s 2025-09-19 00:31:02.464659 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.00s 2025-09-19 00:31:02.464678 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.83s 2025-09-19 00:31:02.881609 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.46s 2025-09-19 00:31:02.881703 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.94s 2025-09-19 00:31:02.881717 | orchestrator | 
osism.commons.cleanup : Remove dependencies that are no longer required --- 5.96s 2025-09-19 00:31:02.881729 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.89s 2025-09-19 00:31:02.881740 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.67s 2025-09-19 00:31:03.167697 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-09-19 00:31:03.167763 | orchestrator | + osism apply network 2025-09-19 00:31:15.695304 | orchestrator | 2025-09-19 00:31:15 | INFO  | Task 5b338b21-0982-49fb-9e4d-c89c47bdf807 (network) was prepared for execution. 2025-09-19 00:31:15.695413 | orchestrator | 2025-09-19 00:31:15 | INFO  | It takes a moment until task 5b338b21-0982-49fb-9e4d-c89c47bdf807 (network) has been started and output is visible here. 2025-09-19 00:31:44.247412 | orchestrator | 2025-09-19 00:31:44.247625 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-09-19 00:31:44.248382 | orchestrator | 2025-09-19 00:31:44.248421 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-09-19 00:31:44.248442 | orchestrator | Friday 19 September 2025 00:31:19 +0000 (0:00:00.289) 0:00:00.289 ****** 2025-09-19 00:31:44.248461 | orchestrator | ok: [testbed-manager] 2025-09-19 00:31:44.248515 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:31:44.248533 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:31:44.248551 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:31:44.248569 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:31:44.248587 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:31:44.248605 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:31:44.248624 | orchestrator | 2025-09-19 00:31:44.248660 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-09-19 00:31:44.248693 | orchestrator | Friday 19 September 2025 00:31:20 +0000 (0:00:00.698) 
0:00:00.988 ****** 2025-09-19 00:31:44.248715 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:31:44.248737 | orchestrator | 2025-09-19 00:31:44.248754 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-09-19 00:31:44.248774 | orchestrator | Friday 19 September 2025 00:31:21 +0000 (0:00:01.166) 0:00:02.154 ****** 2025-09-19 00:31:44.248792 | orchestrator | ok: [testbed-manager] 2025-09-19 00:31:44.248811 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:31:44.248828 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:31:44.248846 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:31:44.248865 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:31:44.248916 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:31:44.248935 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:31:44.248953 | orchestrator | 2025-09-19 00:31:44.248970 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-09-19 00:31:44.248988 | orchestrator | Friday 19 September 2025 00:31:23 +0000 (0:00:02.063) 0:00:04.218 ****** 2025-09-19 00:31:44.249006 | orchestrator | ok: [testbed-manager] 2025-09-19 00:31:44.249024 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:31:44.249042 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:31:44.249060 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:31:44.249078 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:31:44.249095 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:31:44.249112 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:31:44.249129 | orchestrator | 2025-09-19 00:31:44.249148 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-09-19 00:31:44.249166 | orchestrator | 
Friday 19 September 2025 00:31:25 +0000 (0:00:01.831) 0:00:06.050 ****** 2025-09-19 00:31:44.249184 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-09-19 00:31:44.249203 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-09-19 00:31:44.249221 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-09-19 00:31:44.249240 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-09-19 00:31:44.249258 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-09-19 00:31:44.249276 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-09-19 00:31:44.249294 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-09-19 00:31:44.249312 | orchestrator | 2025-09-19 00:31:44.249329 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-09-19 00:31:44.249347 | orchestrator | Friday 19 September 2025 00:31:26 +0000 (0:00:00.955) 0:00:07.006 ****** 2025-09-19 00:31:44.249384 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 00:31:44.249404 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-19 00:31:44.249422 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-19 00:31:44.249440 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 00:31:44.249459 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-19 00:31:44.249502 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-19 00:31:44.249521 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-19 00:31:44.249538 | orchestrator | 2025-09-19 00:31:44.249556 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-09-19 00:31:44.249574 | orchestrator | Friday 19 September 2025 00:31:29 +0000 (0:00:03.312) 0:00:10.319 ****** 2025-09-19 00:31:44.249592 | orchestrator | changed: [testbed-manager] 2025-09-19 00:31:44.249611 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:31:44.249626 | orchestrator | 
changed: [testbed-node-0] 2025-09-19 00:31:44.249637 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:31:44.249647 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:31:44.249658 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:31:44.249675 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:31:44.249693 | orchestrator | 2025-09-19 00:31:44.249711 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-09-19 00:31:44.249729 | orchestrator | Friday 19 September 2025 00:31:31 +0000 (0:00:01.435) 0:00:11.754 ****** 2025-09-19 00:31:44.249746 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 00:31:44.249763 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 00:31:44.249781 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-19 00:31:44.249799 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-19 00:31:44.249816 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-19 00:31:44.249835 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-19 00:31:44.249850 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-19 00:31:44.249865 | orchestrator | 2025-09-19 00:31:44.249883 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-09-19 00:31:44.249901 | orchestrator | Friday 19 September 2025 00:31:33 +0000 (0:00:01.851) 0:00:13.606 ****** 2025-09-19 00:31:44.249934 | orchestrator | ok: [testbed-manager] 2025-09-19 00:31:44.249952 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:31:44.249971 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:31:44.249989 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:31:44.250008 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:31:44.250080 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:31:44.250098 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:31:44.250116 | orchestrator | 2025-09-19 00:31:44.250134 | orchestrator | TASK [osism.commons.network : 
Copy interfaces file] **************************** 2025-09-19 00:31:44.250179 | orchestrator | Friday 19 September 2025 00:31:34 +0000 (0:00:01.086) 0:00:14.692 ****** 2025-09-19 00:31:44.250199 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:31:44.250218 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:31:44.250236 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:31:44.250251 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:31:44.250266 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:31:44.250282 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:31:44.250298 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:31:44.250314 | orchestrator | 2025-09-19 00:31:44.250330 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-09-19 00:31:44.250345 | orchestrator | Friday 19 September 2025 00:31:35 +0000 (0:00:00.649) 0:00:15.342 ****** 2025-09-19 00:31:44.250360 | orchestrator | ok: [testbed-manager] 2025-09-19 00:31:44.250376 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:31:44.250391 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:31:44.250408 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:31:44.250423 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:31:44.250439 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:31:44.250455 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:31:44.250470 | orchestrator | 2025-09-19 00:31:44.250512 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-09-19 00:31:44.250528 | orchestrator | Friday 19 September 2025 00:31:37 +0000 (0:00:02.264) 0:00:17.607 ****** 2025-09-19 00:31:44.250543 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:31:44.250558 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:31:44.250575 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:31:44.250592 | orchestrator | skipping: [testbed-node-3] 2025-09-19 
00:31:44.250607 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:31:44.250623 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:31:44.250640 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-09-19 00:31:44.250658 | orchestrator | 2025-09-19 00:31:44.250673 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-09-19 00:31:44.250689 | orchestrator | Friday 19 September 2025 00:31:38 +0000 (0:00:00.878) 0:00:18.485 ****** 2025-09-19 00:31:44.250706 | orchestrator | ok: [testbed-manager] 2025-09-19 00:31:44.250723 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:31:44.250740 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:31:44.250755 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:31:44.250789 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:31:44.250806 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:31:44.250822 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:31:44.250840 | orchestrator | 2025-09-19 00:31:44.250859 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-09-19 00:31:44.250876 | orchestrator | Friday 19 September 2025 00:31:39 +0000 (0:00:01.654) 0:00:20.140 ****** 2025-09-19 00:31:44.250894 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:31:44.250913 | orchestrator | 2025-09-19 00:31:44.250929 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-19 00:31:44.250958 | orchestrator | Friday 19 September 2025 00:31:41 +0000 (0:00:01.309) 0:00:21.449 ****** 2025-09-19 00:31:44.250975 | orchestrator | ok: [testbed-manager] 2025-09-19 
00:31:44.250994 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:31:44.251012 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:31:44.251030 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:31:44.251047 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:31:44.251071 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:31:44.251088 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:31:44.251106 | orchestrator | 2025-09-19 00:31:44.251123 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-09-19 00:31:44.251141 | orchestrator | Friday 19 September 2025 00:31:42 +0000 (0:00:00.988) 0:00:22.437 ****** 2025-09-19 00:31:44.251158 | orchestrator | ok: [testbed-manager] 2025-09-19 00:31:44.251176 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:31:44.251192 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:31:44.251208 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:31:44.251224 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:31:44.251241 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:31:44.251258 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:31:44.251275 | orchestrator | 2025-09-19 00:31:44.251293 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-09-19 00:31:44.251309 | orchestrator | Friday 19 September 2025 00:31:42 +0000 (0:00:00.810) 0:00:23.248 ****** 2025-09-19 00:31:44.251326 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-09-19 00:31:44.251342 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-09-19 00:31:44.251360 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-09-19 00:31:44.251378 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-19 00:31:44.251395 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-09-19 00:31:44.251413 | orchestrator 
| changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-19 00:31:44.251429 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-09-19 00:31:44.251447 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-09-19 00:31:44.251466 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-19 00:31:44.251552 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-09-19 00:31:44.251569 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-19 00:31:44.251584 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-19 00:31:44.251600 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-19 00:31:44.251616 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-19 00:31:44.251633 | orchestrator | 2025-09-19 00:31:44.251661 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-09-19 00:31:59.921346 | orchestrator | Friday 19 September 2025 00:31:44 +0000 (0:00:01.306) 0:00:24.554 ****** 2025-09-19 00:31:59.921528 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:31:59.921550 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:31:59.921562 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:31:59.921574 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:31:59.921585 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:31:59.921596 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:31:59.921607 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:31:59.921618 | orchestrator | 2025-09-19 00:31:59.921630 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-09-19 00:31:59.921641 | orchestrator | Friday 19 September 2025 00:31:44 +0000 
(0:00:00.667) 0:00:25.221 ****** 2025-09-19 00:31:59.921655 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-node-2, testbed-node-0, testbed-manager, testbed-node-4, testbed-node-5, testbed-node-3 2025-09-19 00:31:59.921692 | orchestrator | 2025-09-19 00:31:59.921704 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-09-19 00:31:59.921715 | orchestrator | Friday 19 September 2025 00:31:49 +0000 (0:00:04.512) 0:00:29.734 ****** 2025-09-19 00:31:59.921728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-19 00:31:59.921741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-19 00:31:59.921752 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-19 00:31:59.921765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-19 00:31:59.921776 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', 
'192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-19 00:31:59.921804 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-19 00:31:59.921816 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-19 00:31:59.921827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-19 00:31:59.921846 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-19 00:31:59.921857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-19 00:31:59.921868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-19 00:31:59.921900 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': 
{'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-19 00:31:59.921914 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-19 00:31:59.921936 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-19 00:31:59.921949 | orchestrator | 2025-09-19 00:31:59.921962 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-09-19 00:31:59.921974 | orchestrator | Friday 19 September 2025 00:31:54 +0000 (0:00:05.012) 0:00:34.747 ****** 2025-09-19 00:31:59.921987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-19 00:31:59.922000 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-19 00:31:59.922072 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 
42}}) 2025-09-19 00:31:59.922086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-19 00:31:59.922099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-19 00:31:59.922113 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-19 00:31:59.922126 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-19 00:31:59.922139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-19 00:31:59.922152 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-19 00:31:59.922165 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-19 00:31:59.922179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-19 00:31:59.922191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-19 00:31:59.922224 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-19 00:32:06.079328 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-19 00:32:06.079435 | orchestrator | 2025-09-19 00:32:06.079452 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-09-19 00:32:06.079532 | orchestrator | Friday 19 September 2025 00:31:59 +0000 (0:00:05.488) 0:00:40.236 ****** 2025-09-19 00:32:06.079547 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:32:06.079559 | orchestrator | 2025-09-19 00:32:06.079572 | orchestrator | 
TASK [osism.commons.network : List existing configuration files] *************** 2025-09-19 00:32:06.079591 | orchestrator | Friday 19 September 2025 00:32:01 +0000 (0:00:01.266) 0:00:41.502 ****** 2025-09-19 00:32:06.079610 | orchestrator | ok: [testbed-manager] 2025-09-19 00:32:06.079629 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:32:06.079670 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:32:06.079690 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:32:06.079707 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:32:06.079726 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:32:06.079743 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:32:06.079761 | orchestrator | 2025-09-19 00:32:06.079777 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-09-19 00:32:06.079795 | orchestrator | Friday 19 September 2025 00:32:02 +0000 (0:00:01.157) 0:00:42.660 ****** 2025-09-19 00:32:06.079813 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-19 00:32:06.079832 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-19 00:32:06.079851 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-19 00:32:06.079869 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-19 00:32:06.079887 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:32:06.079909 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-19 00:32:06.079929 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-19 00:32:06.079947 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-19 00:32:06.079967 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-19 
00:32:06.079987 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:32:06.080008 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-19 00:32:06.080039 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-19 00:32:06.080054 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-19 00:32:06.080067 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-19 00:32:06.080080 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:32:06.080093 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-19 00:32:06.080106 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-19 00:32:06.080118 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-19 00:32:06.080157 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-19 00:32:06.080170 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:32:06.080183 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-19 00:32:06.080196 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-19 00:32:06.080210 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-19 00:32:06.080224 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-19 00:32:06.080235 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:32:06.080246 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-19 00:32:06.080257 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-19 00:32:06.080268 | 
orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-19 00:32:06.080279 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-19 00:32:06.080289 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:32:06.080300 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-19 00:32:06.080311 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-19 00:32:06.080322 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-19 00:32:06.080333 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-19 00:32:06.080344 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:32:06.080355 | orchestrator | 2025-09-19 00:32:06.080366 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-09-19 00:32:06.080397 | orchestrator | Friday 19 September 2025 00:32:04 +0000 (0:00:01.970) 0:00:44.630 ****** 2025-09-19 00:32:06.080408 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:32:06.080419 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:32:06.080430 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:32:06.080441 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:32:06.080452 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:32:06.080492 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:32:06.080503 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:32:06.080514 | orchestrator | 2025-09-19 00:32:06.080525 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-09-19 00:32:06.080536 | orchestrator | Friday 19 September 2025 00:32:04 +0000 (0:00:00.688) 0:00:45.318 ****** 2025-09-19 00:32:06.080547 | orchestrator | skipping: [testbed-manager] 2025-09-19 
00:32:06.080558 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:32:06.080569 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:32:06.080579 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:32:06.080590 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:32:06.080602 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:32:06.080612 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:32:06.080623 | orchestrator | 2025-09-19 00:32:06.080635 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:32:06.080647 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 00:32:06.080660 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 00:32:06.080671 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 00:32:06.080682 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 00:32:06.080703 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 00:32:06.080714 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 00:32:06.080725 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 00:32:06.080736 | orchestrator | 2025-09-19 00:32:06.080747 | orchestrator | 2025-09-19 00:32:06.080758 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 00:32:06.080769 | orchestrator | Friday 19 September 2025 00:32:05 +0000 (0:00:00.702) 0:00:46.021 ****** 2025-09-19 00:32:06.080786 | orchestrator | =============================================================================== 2025-09-19 00:32:06.080797 | 
orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.49s 2025-09-19 00:32:06.080808 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.01s 2025-09-19 00:32:06.080818 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.51s 2025-09-19 00:32:06.080829 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.31s 2025-09-19 00:32:06.080840 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.26s 2025-09-19 00:32:06.080851 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.06s 2025-09-19 00:32:06.080862 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.97s 2025-09-19 00:32:06.080873 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.85s 2025-09-19 00:32:06.080883 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.83s 2025-09-19 00:32:06.080894 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.65s 2025-09-19 00:32:06.080905 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.44s 2025-09-19 00:32:06.080916 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.31s 2025-09-19 00:32:06.080927 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.31s 2025-09-19 00:32:06.080938 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.27s 2025-09-19 00:32:06.080948 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.17s 2025-09-19 00:32:06.080959 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.16s 2025-09-19 00:32:06.080970 | orchestrator | 
osism.commons.network : Check if path for interface file exists --------- 1.09s 2025-09-19 00:32:06.080981 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.99s 2025-09-19 00:32:06.080992 | orchestrator | osism.commons.network : Create required directories --------------------- 0.96s 2025-09-19 00:32:06.081002 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.88s 2025-09-19 00:32:06.347048 | orchestrator | + osism apply wireguard 2025-09-19 00:32:18.374389 | orchestrator | 2025-09-19 00:32:18 | INFO  | Task 53c01220-8117-494c-b24c-1dbefe1e3ed2 (wireguard) was prepared for execution. 2025-09-19 00:32:18.374517 | orchestrator | 2025-09-19 00:32:18 | INFO  | It takes a moment until task 53c01220-8117-494c-b24c-1dbefe1e3ed2 (wireguard) has been started and output is visible here. 2025-09-19 00:32:38.046858 | orchestrator | 2025-09-19 00:32:38.046977 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-09-19 00:32:38.046995 | orchestrator | 2025-09-19 00:32:38.047007 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-09-19 00:32:38.047019 | orchestrator | Friday 19 September 2025 00:32:22 +0000 (0:00:00.226) 0:00:00.226 ****** 2025-09-19 00:32:38.047030 | orchestrator | ok: [testbed-manager] 2025-09-19 00:32:38.047070 | orchestrator | 2025-09-19 00:32:38.047082 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-09-19 00:32:38.047093 | orchestrator | Friday 19 September 2025 00:32:24 +0000 (0:00:01.571) 0:00:01.797 ****** 2025-09-19 00:32:38.047104 | orchestrator | changed: [testbed-manager] 2025-09-19 00:32:38.047115 | orchestrator | 2025-09-19 00:32:38.047127 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-09-19 00:32:38.047145 | orchestrator | Friday 19 September 2025 00:32:30 
+0000 (0:00:06.513) 0:00:08.311 ****** 2025-09-19 00:32:38.047166 | orchestrator | changed: [testbed-manager] 2025-09-19 00:32:38.047185 | orchestrator | 2025-09-19 00:32:38.047205 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-09-19 00:32:38.047222 | orchestrator | Friday 19 September 2025 00:32:31 +0000 (0:00:00.564) 0:00:08.875 ****** 2025-09-19 00:32:38.047241 | orchestrator | changed: [testbed-manager] 2025-09-19 00:32:38.047261 | orchestrator | 2025-09-19 00:32:38.047283 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-09-19 00:32:38.047301 | orchestrator | Friday 19 September 2025 00:32:31 +0000 (0:00:00.411) 0:00:09.286 ****** 2025-09-19 00:32:38.047319 | orchestrator | ok: [testbed-manager] 2025-09-19 00:32:38.047330 | orchestrator | 2025-09-19 00:32:38.047342 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-09-19 00:32:38.047353 | orchestrator | Friday 19 September 2025 00:32:32 +0000 (0:00:00.515) 0:00:09.802 ****** 2025-09-19 00:32:38.047363 | orchestrator | ok: [testbed-manager] 2025-09-19 00:32:38.047374 | orchestrator | 2025-09-19 00:32:38.047385 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-09-19 00:32:38.047397 | orchestrator | Friday 19 September 2025 00:32:32 +0000 (0:00:00.538) 0:00:10.341 ****** 2025-09-19 00:32:38.047410 | orchestrator | ok: [testbed-manager] 2025-09-19 00:32:38.047422 | orchestrator | 2025-09-19 00:32:38.047469 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-09-19 00:32:38.047486 | orchestrator | Friday 19 September 2025 00:32:33 +0000 (0:00:00.423) 0:00:10.764 ****** 2025-09-19 00:32:38.047505 | orchestrator | changed: [testbed-manager] 2025-09-19 00:32:38.047523 | orchestrator | 2025-09-19 00:32:38.047550 | orchestrator | TASK [osism.services.wireguard 
: Copy client configuration files] ************** 2025-09-19 00:32:38.047572 | orchestrator | Friday 19 September 2025 00:32:34 +0000 (0:00:01.193) 0:00:11.957 ****** 2025-09-19 00:32:38.047590 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-19 00:32:38.047608 | orchestrator | changed: [testbed-manager] 2025-09-19 00:32:38.047629 | orchestrator | 2025-09-19 00:32:38.047648 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-09-19 00:32:38.047665 | orchestrator | Friday 19 September 2025 00:32:35 +0000 (0:00:00.923) 0:00:12.881 ****** 2025-09-19 00:32:38.047678 | orchestrator | changed: [testbed-manager] 2025-09-19 00:32:38.047690 | orchestrator | 2025-09-19 00:32:38.047721 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-09-19 00:32:38.047735 | orchestrator | Friday 19 September 2025 00:32:36 +0000 (0:00:01.641) 0:00:14.522 ****** 2025-09-19 00:32:38.047748 | orchestrator | changed: [testbed-manager] 2025-09-19 00:32:38.047761 | orchestrator | 2025-09-19 00:32:38.047773 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:32:38.047785 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 00:32:38.047798 | orchestrator | 2025-09-19 00:32:38.047809 | orchestrator | 2025-09-19 00:32:38.047820 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 00:32:38.047831 | orchestrator | Friday 19 September 2025 00:32:37 +0000 (0:00:00.964) 0:00:15.487 ****** 2025-09-19 00:32:38.047842 | orchestrator | =============================================================================== 2025-09-19 00:32:38.047853 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.51s 2025-09-19 00:32:38.047864 | orchestrator | osism.services.wireguard : Manage 
wg-quick@wg0.service service ---------- 1.64s 2025-09-19 00:32:38.047885 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.57s 2025-09-19 00:32:38.047897 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.19s 2025-09-19 00:32:38.047907 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.96s 2025-09-19 00:32:38.047918 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.92s 2025-09-19 00:32:38.047929 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s 2025-09-19 00:32:38.047940 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.54s 2025-09-19 00:32:38.047951 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.52s 2025-09-19 00:32:38.047962 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s 2025-09-19 00:32:38.047973 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.41s 2025-09-19 00:32:38.302331 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-09-19 00:32:38.331654 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-09-19 00:32:38.331744 | orchestrator | Dload Upload Total Spent Left Speed 2025-09-19 00:32:38.407993 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 183 0 --:--:-- --:--:-- --:--:-- 184 2025-09-19 00:32:38.422066 | orchestrator | + osism apply --environment custom workarounds 2025-09-19 00:32:40.205989 | orchestrator | 2025-09-19 00:32:40 | INFO  | Trying to run play workarounds in environment custom 2025-09-19 00:32:50.304696 | orchestrator | 2025-09-19 00:32:50 | INFO  | Task 2b84df93-f099-4241-8c39-6742b122e313 (workarounds) was prepared for execution. 
2025-09-19 00:32:50.304802 | orchestrator | 2025-09-19 00:32:50 | INFO  | It takes a moment until task 2b84df93-f099-4241-8c39-6742b122e313 (workarounds) has been started and output is visible here.
2025-09-19 00:33:15.584032 | orchestrator |
2025-09-19 00:33:15.584144 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 00:33:15.584158 | orchestrator |
2025-09-19 00:33:15.584170 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-09-19 00:33:15.584181 | orchestrator | Friday 19 September 2025 00:32:54 +0000 (0:00:00.156) 0:00:00.156 ******
2025-09-19 00:33:15.584191 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-09-19 00:33:15.584201 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-09-19 00:33:15.584211 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-09-19 00:33:15.584220 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-09-19 00:33:15.584230 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-09-19 00:33:15.584239 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-09-19 00:33:15.584249 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-09-19 00:33:15.584258 | orchestrator |
2025-09-19 00:33:15.584268 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-09-19 00:33:15.584278 | orchestrator |
2025-09-19 00:33:15.584287 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-09-19 00:33:15.584297 | orchestrator | Friday 19 September 2025 00:32:54 +0000 (0:00:00.747) 0:00:00.903 ******
2025-09-19 00:33:15.584306 | orchestrator | ok: [testbed-manager]
2025-09-19 00:33:15.584317 | orchestrator |
2025-09-19 00:33:15.584327 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-09-19 00:33:15.584337 | orchestrator |
2025-09-19 00:33:15.584346 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-09-19 00:33:15.584356 | orchestrator | Friday 19 September 2025 00:32:57 +0000 (0:00:02.379) 0:00:03.282 ******
2025-09-19 00:33:15.584391 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:33:15.584484 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:33:15.584496 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:33:15.584506 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:33:15.584515 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:33:15.584524 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:33:15.584534 | orchestrator |
2025-09-19 00:33:15.584543 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-09-19 00:33:15.584553 | orchestrator |
2025-09-19 00:33:15.584562 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-09-19 00:33:15.584586 | orchestrator | Friday 19 September 2025 00:32:59 +0000 (0:00:01.811) 0:00:05.094 ******
2025-09-19 00:33:15.584599 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-19 00:33:15.584612 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-19 00:33:15.584622 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-19 00:33:15.584633 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-19 00:33:15.584644 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-19 00:33:15.584655 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-19 00:33:15.584666 | orchestrator |
2025-09-19 00:33:15.584676 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-09-19 00:33:15.584687 | orchestrator | Friday 19 September 2025 00:33:00 +0000 (0:00:01.586) 0:00:06.680 ******
2025-09-19 00:33:15.584698 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:33:15.584709 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:33:15.584721 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:33:15.584732 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:33:15.584742 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:33:15.584753 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:33:15.584763 | orchestrator |
2025-09-19 00:33:15.584774 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-09-19 00:33:15.584785 | orchestrator | Friday 19 September 2025 00:33:04 +0000 (0:00:03.797) 0:00:10.478 ******
2025-09-19 00:33:15.584796 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:33:15.584806 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:33:15.584817 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:33:15.584828 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:33:15.584839 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:33:15.584850 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:33:15.584860 | orchestrator |
2025-09-19 00:33:15.584871 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-09-19 00:33:15.584882 | orchestrator |
2025-09-19 00:33:15.584893 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-09-19 00:33:15.584903 | orchestrator | Friday 19 September 2025 00:33:05 +0000 (0:00:00.706) 0:00:11.185 ******
2025-09-19 00:33:15.584915 | orchestrator | changed: [testbed-manager]
2025-09-19 00:33:15.584925 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:33:15.584937 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:33:15.584946 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:33:15.584955 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:33:15.584965 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:33:15.584974 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:33:15.584983 | orchestrator |
2025-09-19 00:33:15.584993 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-09-19 00:33:15.585002 | orchestrator | Friday 19 September 2025 00:33:06 +0000 (0:00:01.612) 0:00:12.797 ******
2025-09-19 00:33:15.585012 | orchestrator | changed: [testbed-manager]
2025-09-19 00:33:15.585030 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:33:15.585040 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:33:15.585049 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:33:15.585058 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:33:15.585068 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:33:15.585094 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:33:15.585104 | orchestrator |
2025-09-19 00:33:15.585113 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-09-19 00:33:15.585123 | orchestrator | Friday 19 September 2025 00:33:08 +0000 (0:00:01.508) 0:00:14.452 ******
2025-09-19 00:33:15.585133 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:33:15.585142 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:33:15.585152 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:33:15.585161 | orchestrator | ok: [testbed-manager]
2025-09-19 00:33:15.585171 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:33:15.585180 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:33:15.585190 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:33:15.585199 | orchestrator |
2025-09-19 00:33:15.585209 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-09-19 00:33:15.585219 | orchestrator | Friday 19 September 2025 00:33:09 +0000 (0:00:01.508) 0:00:15.961 ******
2025-09-19 00:33:15.585228 | orchestrator | changed: [testbed-manager]
2025-09-19 00:33:15.585238 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:33:15.585248 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:33:15.585257 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:33:15.585267 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:33:15.585276 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:33:15.585285 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:33:15.585295 | orchestrator |
2025-09-19 00:33:15.585305 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-09-19 00:33:15.585314 | orchestrator | Friday 19 September 2025 00:33:11 +0000 (0:00:01.703) 0:00:17.664 ******
2025-09-19 00:33:15.585324 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:33:15.585333 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:33:15.585343 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:33:15.585352 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:33:15.585361 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:33:15.585371 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:33:15.585380 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:33:15.585389 | orchestrator |
2025-09-19 00:33:15.585424 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-09-19 00:33:15.585436 | orchestrator |
2025-09-19 00:33:15.585445 | orchestrator | TASK [Install python3-docker] **************************************************
2025-09-19 00:33:15.585455 | orchestrator | Friday 19 September 2025 00:33:12 +0000 (0:00:00.632) 0:00:18.296 ******
2025-09-19 00:33:15.585464 | orchestrator | ok: [testbed-manager]
2025-09-19 00:33:15.585474 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:33:15.585483 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:33:15.585493 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:33:15.585502 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:33:15.585511 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:33:15.585526 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:33:15.585536 | orchestrator |
2025-09-19 00:33:15.585545 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 00:33:15.585556 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 00:33:15.585566 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 00:33:15.585576 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 00:33:15.585586 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 00:33:15.585602 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 00:33:15.585612 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 00:33:15.585622 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 00:33:15.585631 | orchestrator |
2025-09-19 00:33:15.585641 | orchestrator |
2025-09-19 00:33:15.585650 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 00:33:15.585660 | orchestrator | Friday 19 September 2025 00:33:15 +0000 (0:00:03.227) 0:00:21.524 ******
2025-09-19 00:33:15.585669 | orchestrator | ===============================================================================
2025-09-19 00:33:15.585679 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.80s
2025-09-19 00:33:15.585688 | orchestrator | Install python3-docker -------------------------------------------------- 3.23s
2025-09-19 00:33:15.585698 | orchestrator | Apply netplan configuration --------------------------------------------- 2.38s
2025-09-19 00:33:15.585707 | orchestrator | Apply netplan configuration --------------------------------------------- 1.81s
2025-09-19 00:33:15.585717 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.70s
2025-09-19 00:33:15.585726 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.65s
2025-09-19 00:33:15.585735 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.61s
2025-09-19 00:33:15.585745 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.59s
2025-09-19 00:33:15.585754 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.51s
2025-09-19 00:33:15.585764 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.75s
2025-09-19 00:33:15.585773 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.71s
2025-09-19 00:33:15.585789 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.63s
2025-09-19 00:33:16.189785 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-09-19 00:33:28.078943 | orchestrator | 2025-09-19 00:33:28 | INFO  | Task 353bb929-fb6d-446d-b6bb-5ae66ff74671 (reboot) was prepared for execution.
2025-09-19 00:33:28.079055 | orchestrator | 2025-09-19 00:33:28 | INFO  | It takes a moment until task 353bb929-fb6d-446d-b6bb-5ae66ff74671 (reboot) has been started and output is visible here.
2025-09-19 00:33:37.952717 | orchestrator |
2025-09-19 00:33:37.952828 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 00:33:37.952844 | orchestrator |
2025-09-19 00:33:37.952856 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 00:33:37.952868 | orchestrator | Friday 19 September 2025 00:33:32 +0000 (0:00:00.214) 0:00:00.214 ******
2025-09-19 00:33:37.952879 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:33:37.952891 | orchestrator |
2025-09-19 00:33:37.952902 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 00:33:37.952913 | orchestrator | Friday 19 September 2025 00:33:32 +0000 (0:00:00.098) 0:00:00.313 ******
2025-09-19 00:33:37.952924 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:33:37.952935 | orchestrator |
2025-09-19 00:33:37.952946 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 00:33:37.952958 | orchestrator | Friday 19 September 2025 00:33:33 +0000 (0:00:01.025) 0:00:01.338 ******
2025-09-19 00:33:37.952968 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:33:37.952979 | orchestrator |
2025-09-19 00:33:37.952990 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 00:33:37.953026 | orchestrator |
2025-09-19 00:33:37.953038 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 00:33:37.953049 | orchestrator | Friday 19 September 2025 00:33:33 +0000 (0:00:00.104) 0:00:01.443 ******
2025-09-19 00:33:37.953059 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:33:37.953071 | orchestrator |
2025-09-19 00:33:37.953081 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 00:33:37.953092 | orchestrator | Friday 19 September 2025 00:33:33 +0000 (0:00:00.112) 0:00:01.555 ******
2025-09-19 00:33:37.953103 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:33:37.953114 | orchestrator |
2025-09-19 00:33:37.953125 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 00:33:37.953149 | orchestrator | Friday 19 September 2025 00:33:34 +0000 (0:00:00.651) 0:00:02.207 ******
2025-09-19 00:33:37.953161 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:33:37.953171 | orchestrator |
2025-09-19 00:33:37.953182 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 00:33:37.953193 | orchestrator |
2025-09-19 00:33:37.953204 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 00:33:37.953215 | orchestrator | Friday 19 September 2025 00:33:34 +0000 (0:00:00.112) 0:00:02.319 ******
2025-09-19 00:33:37.953226 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:33:37.953236 | orchestrator |
2025-09-19 00:33:37.953247 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 00:33:37.953260 | orchestrator | Friday 19 September 2025 00:33:34 +0000 (0:00:00.213) 0:00:02.533 ******
2025-09-19 00:33:37.953272 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:33:37.953284 | orchestrator |
2025-09-19 00:33:37.953301 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 00:33:37.953314 | orchestrator | Friday 19 September 2025 00:33:34 +0000 (0:00:00.645) 0:00:03.179 ******
2025-09-19 00:33:37.953327 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:33:37.953340 | orchestrator |
2025-09-19 00:33:37.953353 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 00:33:37.953365 | orchestrator |
2025-09-19 00:33:37.953377 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 00:33:37.953415 | orchestrator | Friday 19 September 2025 00:33:35 +0000 (0:00:00.125) 0:00:03.305 ******
2025-09-19 00:33:37.953426 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:33:37.953437 | orchestrator |
2025-09-19 00:33:37.953448 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 00:33:37.953459 | orchestrator | Friday 19 September 2025 00:33:35 +0000 (0:00:00.102) 0:00:03.407 ******
2025-09-19 00:33:37.953470 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:33:37.953481 | orchestrator |
2025-09-19 00:33:37.953492 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 00:33:37.953503 | orchestrator | Friday 19 September 2025 00:33:35 +0000 (0:00:00.607) 0:00:04.014 ******
2025-09-19 00:33:37.953514 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:33:37.953525 | orchestrator |
2025-09-19 00:33:37.953536 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 00:33:37.953547 | orchestrator |
2025-09-19 00:33:37.953558 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 00:33:37.953569 | orchestrator | Friday 19 September 2025 00:33:35 +0000 (0:00:00.120) 0:00:04.135 ******
2025-09-19 00:33:37.953580 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:33:37.953591 | orchestrator |
2025-09-19 00:33:37.953602 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 00:33:37.953613 | orchestrator | Friday 19 September 2025 00:33:36 +0000 (0:00:00.123) 0:00:04.258 ******
2025-09-19 00:33:37.953624 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:33:37.953635 | orchestrator |
2025-09-19 00:33:37.953646 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 00:33:37.953657 | orchestrator | Friday 19 September 2025 00:33:36 +0000 (0:00:00.652) 0:00:04.911 ******
2025-09-19 00:33:37.953678 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:33:37.953690 | orchestrator |
2025-09-19 00:33:37.953701 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 00:33:37.953712 | orchestrator |
2025-09-19 00:33:37.953723 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 00:33:37.953734 | orchestrator | Friday 19 September 2025 00:33:36 +0000 (0:00:00.110) 0:00:05.021 ******
2025-09-19 00:33:37.953745 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:33:37.953756 | orchestrator |
2025-09-19 00:33:37.953767 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 00:33:37.953779 | orchestrator | Friday 19 September 2025 00:33:36 +0000 (0:00:00.100) 0:00:05.122 ******
2025-09-19 00:33:37.953790 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:33:37.953800 | orchestrator |
2025-09-19 00:33:37.953812 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 00:33:37.953823 | orchestrator | Friday 19 September 2025 00:33:37 +0000 (0:00:00.663) 0:00:05.785 ******
2025-09-19 00:33:37.953850 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:33:37.953861 | orchestrator |
2025-09-19 00:33:37.953872 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 00:33:37.953884 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 00:33:37.953897 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 00:33:37.953908 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 00:33:37.953919 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 00:33:37.953929 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 00:33:37.953940 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 00:33:37.953951 | orchestrator |
2025-09-19 00:33:37.953962 | orchestrator |
2025-09-19 00:33:37.953973 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 00:33:37.953984 | orchestrator | Friday 19 September 2025 00:33:37 +0000 (0:00:00.042) 0:00:05.828 ******
2025-09-19 00:33:37.953995 | orchestrator | ===============================================================================
2025-09-19 00:33:37.954006 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.25s
2025-09-19 00:33:37.954074 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.75s
2025-09-19 00:33:37.954087 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.62s
2025-09-19 00:33:38.226793 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-09-19 00:33:50.197011 | orchestrator | 2025-09-19 00:33:50 | INFO  | Task 35fd0916-a2a9-45fa-8421-7b41044e6890 (wait-for-connection) was prepared for execution.
2025-09-19 00:33:50.197122 | orchestrator | 2025-09-19 00:33:50 | INFO  | It takes a moment until task 35fd0916-a2a9-45fa-8421-7b41044e6890 (wait-for-connection) has been started and output is visible here.
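What the wait-for-connection step does after the reboot can be approximated with a small shell helper that polls a host's SSH port until it accepts TCP connections. This is a hedged sketch, not the playbook's actual implementation: the function name `wait_for_ssh`, the attempt limit, and the 5-second interval are illustrative assumptions, and it relies on bash's `/dev/tcp` pseudo-device.

```shell
#!/usr/bin/env bash
# Minimal sketch (assumption, not the real play): poll a TCP port until it
# accepts connections, similar in spirit to Ansible's wait_for_connection.
wait_for_ssh() {
    local host="$1" port="${2:-22}" max_attempts="${3:-60}" attempt_num=1
    while true; do
        # bash opens a TCP connection via the /dev/tcp pseudo-device;
        # the subshell exits non-zero if the connection is refused.
        if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
            echo "${host}:${port} is reachable"
            return 0
        fi
        if (( attempt_num++ == max_attempts )); then
            echo "${host}:${port} not reachable after ${max_attempts} attempts" >&2
            return 1
        fi
        sleep 5
    done
}
```

With `max_attempts=1` the function fails fast on a closed port, which makes the failure path easy to exercise.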
2025-09-19 00:34:06.053897 | orchestrator |
2025-09-19 00:34:06.054093 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-09-19 00:34:06.054112 | orchestrator |
2025-09-19 00:34:06.054125 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-09-19 00:34:06.054136 | orchestrator | Friday 19 September 2025 00:33:54 +0000 (0:00:00.243) 0:00:00.243 ******
2025-09-19 00:34:06.054174 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:34:06.054187 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:34:06.054198 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:34:06.054209 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:34:06.054219 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:34:06.054230 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:34:06.054241 | orchestrator |
2025-09-19 00:34:06.054252 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 00:34:06.054264 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:34:06.054293 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:34:06.054305 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:34:06.054316 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:34:06.054327 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:34:06.054338 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:34:06.054349 | orchestrator |
2025-09-19 00:34:06.054360 | orchestrator |
2025-09-19 00:34:06.054401 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 00:34:06.054413 | orchestrator | Friday 19 September 2025 00:34:05 +0000 (0:00:11.498) 0:00:11.742 ******
2025-09-19 00:34:06.054424 | orchestrator | ===============================================================================
2025-09-19 00:34:06.054435 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.50s
2025-09-19 00:34:06.307786 | orchestrator | + osism apply hddtemp
2025-09-19 00:34:18.212679 | orchestrator | 2025-09-19 00:34:18 | INFO  | Task d6dfee23-9d35-44b6-bf6d-1e7896b4fbeb (hddtemp) was prepared for execution.
2025-09-19 00:34:18.212789 | orchestrator | 2025-09-19 00:34:18 | INFO  | It takes a moment until task d6dfee23-9d35-44b6-bf6d-1e7896b4fbeb (hddtemp) has been started and output is visible here.
2025-09-19 00:34:45.811221 | orchestrator |
2025-09-19 00:34:45.811383 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2025-09-19 00:34:45.811401 | orchestrator |
2025-09-19 00:34:45.811413 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2025-09-19 00:34:45.811424 | orchestrator | Friday 19 September 2025 00:34:22 +0000 (0:00:00.266) 0:00:00.266 ******
2025-09-19 00:34:45.811436 | orchestrator | ok: [testbed-manager]
2025-09-19 00:34:45.811448 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:34:45.811459 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:34:45.811470 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:34:45.811480 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:34:45.811491 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:34:45.811502 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:34:45.811512 | orchestrator |
2025-09-19 00:34:45.811523 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2025-09-19 00:34:45.811534 | orchestrator | Friday 19 September 2025 00:34:22 +0000 (0:00:00.685) 0:00:00.952 ******
2025-09-19 00:34:45.811548 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 00:34:45.811561 | orchestrator |
2025-09-19 00:34:45.811572 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2025-09-19 00:34:45.811583 | orchestrator | Friday 19 September 2025 00:34:24 +0000 (0:00:01.164) 0:00:02.116 ******
2025-09-19 00:34:45.811594 | orchestrator | ok: [testbed-manager]
2025-09-19 00:34:45.811630 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:34:45.811641 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:34:45.811651 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:34:45.811662 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:34:45.811673 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:34:45.811684 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:34:45.811695 | orchestrator |
2025-09-19 00:34:45.811706 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2025-09-19 00:34:45.811732 | orchestrator | Friday 19 September 2025 00:34:25 +0000 (0:00:01.915) 0:00:04.032 ******
2025-09-19 00:34:45.811743 | orchestrator | changed: [testbed-manager]
2025-09-19 00:34:45.811755 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:34:45.811765 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:34:45.811776 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:34:45.811787 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:34:45.811797 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:34:45.811808 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:34:45.811819 | orchestrator |
2025-09-19 00:34:45.811829 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2025-09-19 00:34:45.811840 | orchestrator | Friday 19 September 2025 00:34:27 +0000 (0:00:01.147) 0:00:05.180 ******
2025-09-19 00:34:45.811851 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:34:45.811862 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:34:45.811872 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:34:45.811883 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:34:45.811894 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:34:45.811904 | orchestrator | ok: [testbed-manager]
2025-09-19 00:34:45.811915 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:34:45.811926 | orchestrator |
2025-09-19 00:34:45.811937 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-09-19 00:34:45.811947 | orchestrator | Friday 19 September 2025 00:34:28 +0000 (0:00:01.132) 0:00:06.312 ******
2025-09-19 00:34:45.811958 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:34:45.811969 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:34:45.811980 | orchestrator | changed: [testbed-manager]
2025-09-19 00:34:45.811990 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:34:45.812001 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:34:45.812012 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:34:45.812022 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:34:45.812033 | orchestrator |
2025-09-19 00:34:45.812044 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-09-19 00:34:45.812054 | orchestrator | Friday 19 September 2025 00:34:29 +0000 (0:00:00.817) 0:00:07.129 ******
2025-09-19 00:34:45.812065 | orchestrator | changed: [testbed-manager]
2025-09-19 00:34:45.812076 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:34:45.812087 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:34:45.812097 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:34:45.812108 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:34:45.812118 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:34:45.812129 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:34:45.812140 | orchestrator |
2025-09-19 00:34:45.812150 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-09-19 00:34:45.812161 | orchestrator | Friday 19 September 2025 00:34:42 +0000 (0:00:13.032) 0:00:20.162 ******
2025-09-19 00:34:45.812172 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 00:34:45.812183 | orchestrator |
2025-09-19 00:34:45.812194 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-09-19 00:34:45.812205 | orchestrator | Friday 19 September 2025 00:34:43 +0000 (0:00:01.353) 0:00:21.515 ******
2025-09-19 00:34:45.812216 | orchestrator | changed: [testbed-manager]
2025-09-19 00:34:45.812227 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:34:45.812248 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:34:45.812258 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:34:45.812270 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:34:45.812280 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:34:45.812291 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:34:45.812302 | orchestrator |
2025-09-19 00:34:45.812312 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 00:34:45.812324 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:34:45.812372 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 00:34:45.812385 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 00:34:45.812396 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 00:34:45.812407 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 00:34:45.812418 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 00:34:45.812429 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 00:34:45.812440 | orchestrator |
2025-09-19 00:34:45.812450 | orchestrator |
2025-09-19 00:34:45.812461 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 00:34:45.812472 | orchestrator | Friday 19 September 2025 00:34:45 +0000 (0:00:02.004) 0:00:23.520 ******
2025-09-19 00:34:45.812483 | orchestrator | ===============================================================================
2025-09-19 00:34:45.812493 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.03s
2025-09-19 00:34:45.812504 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.01s
2025-09-19 00:34:45.812515 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.92s
2025-09-19 00:34:45.812531 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.35s
2025-09-19 00:34:45.812542 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.16s
2025-09-19 00:34:45.812553 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.15s
2025-09-19 00:34:45.812564 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.13s
2025-09-19 00:34:45.812574 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.82s
2025-09-19 00:34:45.812585 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.69s
2025-09-19 00:34:46.135086 | orchestrator | ++ semver 9.2.0 7.1.1
2025-09-19 00:34:46.185478 | orchestrator | + [[ 1 -ge 0 ]]
2025-09-19 00:34:46.185574 | orchestrator | + sudo systemctl restart manager.service
2025-09-19 00:34:59.650949 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-09-19 00:34:59.651063 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-09-19 00:34:59.651081 | orchestrator | + local max_attempts=60
2025-09-19 00:34:59.651094 | orchestrator | + local name=ceph-ansible
2025-09-19 00:34:59.651106 | orchestrator | + local attempt_num=1
2025-09-19 00:34:59.651118 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 00:34:59.690742 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 00:34:59.690826 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 00:34:59.690842 | orchestrator | + sleep 5
2025-09-19 00:35:04.696533 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 00:35:04.720580 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 00:35:04.720699 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 00:35:04.720727 | orchestrator | + sleep 5
2025-09-19 00:35:09.722549 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 00:35:09.758683 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 00:35:09.758756 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 00:35:09.758770 | orchestrator | + sleep 5
2025-09-19 00:35:14.764000 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 00:35:14.801824 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 00:35:14.801897 | orchestrator |
+ (( attempt_num++ == max_attempts )) 2025-09-19 00:35:14.801911 | orchestrator | + sleep 5 2025-09-19 00:35:19.806610 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 00:35:19.851463 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-19 00:35:19.851579 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 00:35:19.851605 | orchestrator | + sleep 5 2025-09-19 00:35:24.856637 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 00:35:24.897375 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-19 00:35:24.897470 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 00:35:24.897485 | orchestrator | + sleep 5 2025-09-19 00:35:29.902773 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 00:35:29.947975 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-19 00:35:29.948056 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 00:35:29.948070 | orchestrator | + sleep 5 2025-09-19 00:35:34.955437 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 00:35:34.981722 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-19 00:35:34.981790 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 00:35:34.981796 | orchestrator | + sleep 5 2025-09-19 00:35:39.986125 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 00:35:40.027546 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-19 00:35:40.027650 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 00:35:40.027676 | orchestrator | + sleep 5 2025-09-19 00:35:45.029517 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 00:35:45.066440 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-19 00:35:45.066523 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2025-09-19 00:35:45.066539 | orchestrator | + sleep 5 2025-09-19 00:35:50.071062 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 00:35:50.112233 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-19 00:35:50.112361 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 00:35:50.112391 | orchestrator | + sleep 5 2025-09-19 00:35:55.117774 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 00:35:55.164271 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-19 00:35:55.164404 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 00:35:55.164418 | orchestrator | + sleep 5 2025-09-19 00:36:00.168641 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 00:36:00.204278 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-19 00:36:00.204390 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 00:36:00.204404 | orchestrator | + sleep 5 2025-09-19 00:36:05.210003 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 00:36:05.247911 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-19 00:36:05.248012 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-19 00:36:05.248027 | orchestrator | + local max_attempts=60 2025-09-19 00:36:05.248040 | orchestrator | + local name=kolla-ansible 2025-09-19 00:36:05.248051 | orchestrator | + local attempt_num=1 2025-09-19 00:36:05.249151 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-19 00:36:05.283644 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-19 00:36:05.283720 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-19 00:36:05.283733 | orchestrator | + local max_attempts=60 2025-09-19 00:36:05.283744 | orchestrator | + local name=osism-ansible 2025-09-19 00:36:05.283756 | 
orchestrator | + local attempt_num=1 2025-09-19 00:36:05.284448 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-19 00:36:05.317638 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-19 00:36:05.317716 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-19 00:36:05.317758 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-19 00:36:05.505578 | orchestrator | ARA in ceph-ansible already disabled. 2025-09-19 00:36:05.649104 | orchestrator | ARA in kolla-ansible already disabled. 2025-09-19 00:36:05.820738 | orchestrator | ARA in osism-ansible already disabled. 2025-09-19 00:36:05.971627 | orchestrator | ARA in osism-kubernetes already disabled. 2025-09-19 00:36:05.972117 | orchestrator | + osism apply gather-facts 2025-09-19 00:36:17.950555 | orchestrator | 2025-09-19 00:36:17 | INFO  | Task 513ea2b4-a06f-4667-96b9-d2f7c3a5ca07 (gather-facts) was prepared for execution. 2025-09-19 00:36:17.950648 | orchestrator | 2025-09-19 00:36:17 | INFO  | It takes a moment until task 513ea2b4-a06f-4667-96b9-d2f7c3a5ca07 (gather-facts) has been started and output is visible here. 
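The `set -x` trace above shows a `wait_for_container_healthy` helper polling `docker inspect -f '{{.State.Health.Status}}'` every 5 seconds until the container reports `healthy` or the attempt budget runs out. A minimal sketch reconstructed from that trace (the actual helper lives in the testbed scripts and may differ in details such as error reporting):

```shell
#!/usr/bin/env bash
# Reconstructed from the trace: poll a container's Docker health
# status until it is "healthy", giving up after max_attempts polls
# (5 seconds apart, so 60 attempts is a ~5-minute budget).
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the run above the loop rides out both the `unhealthy` and `starting` phases of `ceph-ansible` (roughly a minute after the manager service restart) before all three containers report `healthy`.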
2025-09-19 00:36:30.654481 | orchestrator | 2025-09-19 00:36:30.654579 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-19 00:36:30.654595 | orchestrator | 2025-09-19 00:36:30.654608 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-19 00:36:30.654634 | orchestrator | Friday 19 September 2025 00:36:21 +0000 (0:00:00.220) 0:00:00.220 ****** 2025-09-19 00:36:30.654646 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:36:30.654658 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:36:30.654669 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:36:30.654681 | orchestrator | ok: [testbed-manager] 2025-09-19 00:36:30.654692 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:36:30.654703 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:36:30.654713 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:36:30.654724 | orchestrator | 2025-09-19 00:36:30.654735 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-19 00:36:30.654746 | orchestrator | 2025-09-19 00:36:30.654757 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-19 00:36:30.654769 | orchestrator | Friday 19 September 2025 00:36:29 +0000 (0:00:08.106) 0:00:08.326 ****** 2025-09-19 00:36:30.654780 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:36:30.654791 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:36:30.654802 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:36:30.654813 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:36:30.654824 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:36:30.654835 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:36:30.654845 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:36:30.654856 | orchestrator | 2025-09-19 00:36:30.654867 | orchestrator | PLAY RECAP 
********************************************************************* 2025-09-19 00:36:30.654878 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 00:36:30.654890 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 00:36:30.654901 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 00:36:30.654912 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 00:36:30.654923 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 00:36:30.654934 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 00:36:30.654945 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 00:36:30.654956 | orchestrator | 2025-09-19 00:36:30.654967 | orchestrator | 2025-09-19 00:36:30.654977 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 00:36:30.654988 | orchestrator | Friday 19 September 2025 00:36:30 +0000 (0:00:00.449) 0:00:08.776 ****** 2025-09-19 00:36:30.655021 | orchestrator | =============================================================================== 2025-09-19 00:36:30.655032 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.11s 2025-09-19 00:36:30.655043 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.45s 2025-09-19 00:36:30.899351 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-09-19 00:36:30.910589 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-09-19 00:36:30.921008 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-09-19 00:36:30.931116 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-09-19 00:36:30.940056 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-09-19 00:36:30.951659 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-09-19 00:36:30.965778 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-09-19 00:36:30.980367 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-09-19 00:36:30.990608 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-09-19 00:36:31.000585 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-09-19 00:36:31.009773 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-09-19 00:36:31.019123 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-09-19 00:36:31.035518 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-09-19 00:36:31.045505 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-09-19 00:36:31.053583 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-09-19 00:36:31.062318 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-09-19 00:36:31.070930 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-09-19 00:36:31.080187 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-09-19 00:36:31.091292 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-09-19 00:36:31.099301 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-09-19 00:36:31.109167 | orchestrator | + [[ false == \t\r\u\e ]] 2025-09-19 00:36:31.601205 | orchestrator | ok: Runtime: 0:23:04.374454 2025-09-19 00:36:31.709146 | 2025-09-19 00:36:31.709362 | TASK [Deploy services] 2025-09-19 00:36:32.243139 | orchestrator | skipping: Conditional result was False 2025-09-19 00:36:32.262300 | 2025-09-19 00:36:32.262460 | TASK [Deploy in a nutshell] 2025-09-19 00:36:32.964489 | orchestrator | + set -e 2025-09-19 00:36:32.964718 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-19 00:36:32.964754 | orchestrator | ++ export INTERACTIVE=false 2025-09-19 00:36:32.964788 | orchestrator | ++ INTERACTIVE=false 2025-09-19 00:36:32.964812 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-19 00:36:32.964833 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-19 00:36:32.964883 | orchestrator | + source /opt/manager-vars.sh 2025-09-19 00:36:32.964935 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-19 00:36:32.964964 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-19 00:36:32.964985 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-19 00:36:32.965002 | orchestrator | ++ CEPH_VERSION=reef 2025-09-19 00:36:32.965014 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-19 00:36:32.965032 | orchestrator | ++ 
CONFIGURATION_VERSION=main 2025-09-19 00:36:32.965043 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-09-19 00:36:32.965064 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-09-19 00:36:32.965075 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-19 00:36:32.965096 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-19 00:36:32.965107 | orchestrator | ++ export ARA=false 2025-09-19 00:36:32.965119 | orchestrator | ++ ARA=false 2025-09-19 00:36:32.965130 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-19 00:36:32.965142 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-19 00:36:32.965153 | orchestrator | ++ export TEMPEST=true 2025-09-19 00:36:32.965163 | orchestrator | ++ TEMPEST=true 2025-09-19 00:36:32.965174 | orchestrator | ++ export IS_ZUUL=true 2025-09-19 00:36:32.965191 | orchestrator | ++ IS_ZUUL=true 2025-09-19 00:36:32.965202 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-09-19 00:36:32.965214 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-09-19 00:36:32.965225 | orchestrator | ++ export EXTERNAL_API=false 2025-09-19 00:36:32.965235 | orchestrator | ++ EXTERNAL_API=false 2025-09-19 00:36:32.965246 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-19 00:36:32.965290 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-19 00:36:32.965304 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-19 00:36:32.965315 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-19 00:36:32.965326 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-19 00:36:32.965337 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-19 00:36:32.965353 | orchestrator | 2025-09-19 00:36:32.965364 | orchestrator | # PULL IMAGES 2025-09-19 00:36:32.965375 | orchestrator | 2025-09-19 00:36:32.965386 | orchestrator | + echo 2025-09-19 00:36:32.965397 | orchestrator | + echo '# PULL IMAGES' 2025-09-19 00:36:32.965408 | orchestrator | + echo 2025-09-19 00:36:32.967049 | orchestrator | ++ semver 9.2.0 7.0.0 2025-09-19 00:36:33.032178 
| orchestrator | + [[ 1 -ge 0 ]] 2025-09-19 00:36:33.032347 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-09-19 00:36:34.848132 | orchestrator | 2025-09-19 00:36:34 | INFO  | Trying to run play pull-images in environment custom 2025-09-19 00:36:44.959666 | orchestrator | 2025-09-19 00:36:44 | INFO  | Task f1ee9be0-65b6-4f1f-a325-b74c93685a34 (pull-images) was prepared for execution. 2025-09-19 00:36:44.959718 | orchestrator | 2025-09-19 00:36:44 | INFO  | Task f1ee9be0-65b6-4f1f-a325-b74c93685a34 is running in background. No more output. Check ARA for logs. 2025-09-19 00:36:46.869553 | orchestrator | 2025-09-19 00:36:46 | INFO  | Trying to run play wipe-partitions in environment custom 2025-09-19 00:36:57.038445 | orchestrator | 2025-09-19 00:36:57 | INFO  | Task a3e6641d-f378-417a-8b81-8e908681117d (wipe-partitions) was prepared for execution. 2025-09-19 00:36:57.038567 | orchestrator | 2025-09-19 00:36:57 | INFO  | It takes a moment until task a3e6641d-f378-417a-8b81-8e908681117d (wipe-partitions) has been started and output is visible here. 
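The `++ semver 9.2.0 7.0.0` / `+ [[ 1 -ge 0 ]]` steps in the trace gate version-dependent deploy paths on a comparator that prints -1, 0, or 1. A hedged sketch of such a comparator, assuming plain numeric `x.y.z` versions (the real `semver` utility on the manager handles full semantic versioning and may behave differently for pre-release tags):

```shell
# Minimal semver-style comparator: prints -1, 0, or 1 when $1 is
# older than, equal to, or newer than $2. Assumes numeric x.y.z
# components only (illustrative, not the testbed's actual helper).
semver() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for i in 0 1 2; do
        if (( ${a[i]:-0} < ${b[i]:-0} )); then echo -1; return; fi
        if (( ${a[i]:-0} > ${b[i]:-0} )); then echo 1; return; fi
    done
    echo 0
}

# Usage mirroring the trace: take the newer-version code path only
# when MANAGER_VERSION (here 9.2.0) is at least 7.0.0.
[[ $(semver 9.2.0 7.0.0) -ge 0 ]] && echo "run new-path steps"
```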
2025-09-19 00:37:10.208175 | orchestrator | 2025-09-19 00:37:10.208369 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-09-19 00:37:10.208388 | orchestrator | 2025-09-19 00:37:10.208400 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-09-19 00:37:10.208421 | orchestrator | Friday 19 September 2025 00:37:01 +0000 (0:00:00.165) 0:00:00.165 ****** 2025-09-19 00:37:10.208434 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:37:10.208447 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:37:10.208458 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:37:10.208469 | orchestrator | 2025-09-19 00:37:10.208481 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-09-19 00:37:10.208519 | orchestrator | Friday 19 September 2025 00:37:01 +0000 (0:00:00.588) 0:00:00.753 ****** 2025-09-19 00:37:10.208531 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:37:10.208543 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:37:10.208554 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:37:10.208569 | orchestrator | 2025-09-19 00:37:10.208580 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-09-19 00:37:10.208603 | orchestrator | Friday 19 September 2025 00:37:02 +0000 (0:00:00.299) 0:00:01.052 ****** 2025-09-19 00:37:10.208625 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:37:10.208637 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:37:10.208648 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:37:10.208659 | orchestrator | 2025-09-19 00:37:10.208670 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-09-19 00:37:10.208681 | orchestrator | Friday 19 September 2025 00:37:02 +0000 (0:00:00.697) 0:00:01.750 ****** 2025-09-19 00:37:10.208692 | orchestrator | skipping: 
[testbed-node-3] 2025-09-19 00:37:10.208705 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:37:10.208717 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:37:10.208729 | orchestrator | 2025-09-19 00:37:10.208741 | orchestrator | TASK [Check device availability] *********************************************** 2025-09-19 00:37:10.208754 | orchestrator | Friday 19 September 2025 00:37:02 +0000 (0:00:00.279) 0:00:02.029 ****** 2025-09-19 00:37:10.208766 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-19 00:37:10.208783 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-19 00:37:10.208796 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-19 00:37:10.208808 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-19 00:37:10.208821 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-19 00:37:10.208833 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-19 00:37:10.208845 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-19 00:37:10.208855 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-19 00:37:10.208866 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-19 00:37:10.208877 | orchestrator | 2025-09-19 00:37:10.208888 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-09-19 00:37:10.208899 | orchestrator | Friday 19 September 2025 00:37:04 +0000 (0:00:01.202) 0:00:03.231 ****** 2025-09-19 00:37:10.208911 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-09-19 00:37:10.208922 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-09-19 00:37:10.208933 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-09-19 00:37:10.208944 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-09-19 00:37:10.208954 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-09-19 00:37:10.208965 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-09-19 00:37:10.208976 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-09-19 00:37:10.208986 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-09-19 00:37:10.208997 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-09-19 00:37:10.209008 | orchestrator | 2025-09-19 00:37:10.209018 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-09-19 00:37:10.209029 | orchestrator | Friday 19 September 2025 00:37:05 +0000 (0:00:01.398) 0:00:04.630 ****** 2025-09-19 00:37:10.209040 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-19 00:37:10.209051 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-19 00:37:10.209062 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-19 00:37:10.209072 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-19 00:37:10.209089 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-19 00:37:10.209100 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-19 00:37:10.209111 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-19 00:37:10.209122 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-19 00:37:10.209140 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-19 00:37:10.209151 | orchestrator | 2025-09-19 00:37:10.209162 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-09-19 00:37:10.209173 | orchestrator | Friday 19 September 2025 00:37:08 +0000 (0:00:02.996) 0:00:07.626 ****** 2025-09-19 00:37:10.209184 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:37:10.209195 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:37:10.209205 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:37:10.209216 | orchestrator | 2025-09-19 00:37:10.209227 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-09-19 00:37:10.209238 | orchestrator | Friday 19 September 2025 00:37:09 +0000 (0:00:00.611) 0:00:08.238 ****** 2025-09-19 00:37:10.209275 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:37:10.209286 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:37:10.209297 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:37:10.209308 | orchestrator | 2025-09-19 00:37:10.209319 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:37:10.209331 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 00:37:10.209343 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 00:37:10.209372 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 00:37:10.209384 | orchestrator | 2025-09-19 00:37:10.209395 | orchestrator | 2025-09-19 00:37:10.209406 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 00:37:10.209417 | orchestrator | Friday 19 September 2025 00:37:09 +0000 (0:00:00.628) 0:00:08.867 ****** 2025-09-19 00:37:10.209428 | orchestrator | =============================================================================== 2025-09-19 00:37:10.209439 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 3.00s 2025-09-19 00:37:10.209450 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.40s 2025-09-19 00:37:10.209460 | orchestrator | Check device availability ----------------------------------------------- 1.20s 2025-09-19 00:37:10.209471 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.70s 2025-09-19 00:37:10.209482 | orchestrator | Request device events from the kernel 
----------------------------------- 0.63s 2025-09-19 00:37:10.209493 | orchestrator | Reload udev rules ------------------------------------------------------- 0.61s 2025-09-19 00:37:10.209504 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s 2025-09-19 00:37:10.209515 | orchestrator | Remove all rook related logical devices --------------------------------- 0.30s 2025-09-19 00:37:10.209525 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.28s 2025-09-19 00:37:22.414779 | orchestrator | 2025-09-19 00:37:22 | INFO  | Task fb46abbb-01db-427e-b072-5043fbcade94 (facts) was prepared for execution. 2025-09-19 00:37:22.414884 | orchestrator | 2025-09-19 00:37:22 | INFO  | It takes a moment until task fb46abbb-01db-427e-b072-5043fbcade94 (facts) has been started and output is visible here. 2025-09-19 00:37:33.914277 | orchestrator | 2025-09-19 00:37:33.914384 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-19 00:37:33.914398 | orchestrator | 2025-09-19 00:37:33.914409 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-19 00:37:33.914420 | orchestrator | Friday 19 September 2025 00:37:26 +0000 (0:00:00.267) 0:00:00.267 ****** 2025-09-19 00:37:33.914430 | orchestrator | ok: [testbed-manager] 2025-09-19 00:37:33.914441 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:37:33.914451 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:37:33.914460 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:37:33.914494 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:37:33.914504 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:37:33.914513 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:37:33.914523 | orchestrator | 2025-09-19 00:37:33.914535 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-19 00:37:33.914544 | 
orchestrator | Friday 19 September 2025 00:37:27 +0000 (0:00:00.944) 0:00:01.212 ******
2025-09-19 00:37:33.914554 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:37:33.914565 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:37:33.914575 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:37:33.914584 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:37:33.914593 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:37:33.914603 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:37:33.914613 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:37:33.914622 | orchestrator |
2025-09-19 00:37:33.914632 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-19 00:37:33.914641 | orchestrator |
2025-09-19 00:37:33.914651 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-19 00:37:33.914661 | orchestrator | Friday 19 September 2025 00:37:28 +0000 (0:00:01.115) 0:00:02.328 ******
2025-09-19 00:37:33.914670 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:37:33.914680 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:37:33.914689 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:37:33.914700 | orchestrator | ok: [testbed-manager]
2025-09-19 00:37:33.914709 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:37:33.914719 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:37:33.914729 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:37:33.914738 | orchestrator |
2025-09-19 00:37:33.914748 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-19 00:37:33.914757 | orchestrator |
2025-09-19 00:37:33.914767 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-19 00:37:33.914792 | orchestrator | Friday 19 September 2025 00:37:33 +0000 (0:00:04.482) 0:00:06.810 ******
2025-09-19 00:37:33.914803 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:37:33.914814 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:37:33.914825 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:37:33.914836 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:37:33.914846 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:37:33.914857 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:37:33.914867 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:37:33.914878 | orchestrator |
2025-09-19 00:37:33.914889 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 00:37:33.914901 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 00:37:33.914912 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 00:37:33.914923 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 00:37:33.914935 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 00:37:33.914946 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 00:37:33.914957 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 00:37:33.914969 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 00:37:33.914980 | orchestrator |
2025-09-19 00:37:33.914991 | orchestrator |
2025-09-19 00:37:33.915002 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 00:37:33.915020 | orchestrator | Friday 19 September 2025 00:37:33 +0000 (0:00:00.524) 0:00:07.334 ******
2025-09-19 00:37:33.915032 | orchestrator | ===============================================================================
2025-09-19 00:37:33.915043 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.48s
2025-09-19 00:37:33.915054 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.12s
2025-09-19 00:37:33.915065 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.94s
2025-09-19 00:37:33.915076 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s
2025-09-19 00:37:36.139355 | orchestrator | 2025-09-19 00:37:36 | INFO  | Task ba1bd605-0573-4fc1-9c50-033ed42bc5c3 (ceph-configure-lvm-volumes) was prepared for execution.
2025-09-19 00:37:36.139477 | orchestrator | 2025-09-19 00:37:36 | INFO  | It takes a moment until task ba1bd605-0573-4fc1-9c50-033ed42bc5c3 (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-09-19 00:37:47.543013 | orchestrator |
2025-09-19 00:37:47.543147 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-19 00:37:47.543175 | orchestrator |
2025-09-19 00:37:47.543196 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-19 00:37:47.543278 | orchestrator | Friday 19 September 2025 00:37:40 +0000 (0:00:00.291) 0:00:00.291 ******
2025-09-19 00:37:47.543301 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-19 00:37:47.543320 | orchestrator |
2025-09-19 00:37:47.543340 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-19 00:37:47.543358 | orchestrator | Friday 19 September 2025 00:37:40 +0000 (0:00:00.224) 0:00:00.515 ******
2025-09-19 00:37:47.543379 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:37:47.543399 | orchestrator |
2025-09-19 00:37:47.543417 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:37:47.543437 | orchestrator |
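The PLAY RECAP and TASKS RECAP blocks above are the usual place tooling checks whether a run succeeded. As an illustrative sketch (not part of the job itself), a recap line like the ones printed for each testbed host can be parsed into counters with a small helper; the function and regex below are hypothetical, not something Ansible or Zuul provides:

```python
import re

# Matches an Ansible PLAY RECAP line: "<host> : ok=2  changed=0 ..."
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    """Parse one PLAY RECAP line into (host, {counter: value})."""
    m = RECAP_RE.match(line.strip())
    if not m:
        raise ValueError(f"not a recap line: {line!r}")
    counters = {
        key: int(value)
        for key, value in (pair.split("=") for pair in m.group("counters").split())
    }
    return m.group("host"), counters

host, counters = parse_recap_line(
    "testbed-manager : ok=2  changed=0 unreachable=0 failed=0 "
    "skipped=2  rescued=0 ignored=0"
)
# A run is healthy when nothing failed and no host was unreachable.
healthy = counters["failed"] == 0 and counters["unreachable"] == 0
```

With the recap line shown in this log, every host reports `failed=0 unreachable=0`, so such a check would pass.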
Friday 19 September 2025 00:37:40 +0000 (0:00:00.200) 0:00:00.716 ****** 2025-09-19 00:37:47.543457 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-19 00:37:47.543476 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-19 00:37:47.543496 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-19 00:37:47.543516 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-19 00:37:47.543535 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-19 00:37:47.543555 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-19 00:37:47.543574 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-19 00:37:47.543592 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-19 00:37:47.543611 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-19 00:37:47.543632 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-19 00:37:47.543651 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-19 00:37:47.543682 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-19 00:37:47.543700 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-19 00:37:47.543720 | orchestrator | 2025-09-19 00:37:47.543738 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:37:47.543755 | orchestrator | Friday 19 September 2025 00:37:40 +0000 (0:00:00.322) 0:00:01.039 ****** 2025-09-19 
00:37:47.543773 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:37:47.543791 | orchestrator | 2025-09-19 00:37:47.543837 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:37:47.543857 | orchestrator | Friday 19 September 2025 00:37:41 +0000 (0:00:00.407) 0:00:01.446 ****** 2025-09-19 00:37:47.543877 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:37:47.543895 | orchestrator | 2025-09-19 00:37:47.543913 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:37:47.543932 | orchestrator | Friday 19 September 2025 00:37:41 +0000 (0:00:00.181) 0:00:01.628 ****** 2025-09-19 00:37:47.543954 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:37:47.543980 | orchestrator | 2025-09-19 00:37:47.544009 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:37:47.544029 | orchestrator | Friday 19 September 2025 00:37:41 +0000 (0:00:00.177) 0:00:01.806 ****** 2025-09-19 00:37:47.544048 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:37:47.544066 | orchestrator | 2025-09-19 00:37:47.544091 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:37:47.544109 | orchestrator | Friday 19 September 2025 00:37:41 +0000 (0:00:00.184) 0:00:01.990 ****** 2025-09-19 00:37:47.544127 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:37:47.544146 | orchestrator | 2025-09-19 00:37:47.544164 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:37:47.544183 | orchestrator | Friday 19 September 2025 00:37:42 +0000 (0:00:00.163) 0:00:02.154 ****** 2025-09-19 00:37:47.544194 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:37:47.544205 | orchestrator | 2025-09-19 00:37:47.544250 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-09-19 00:37:47.544263 | orchestrator | Friday 19 September 2025 00:37:42 +0000 (0:00:00.177) 0:00:02.331 ****** 2025-09-19 00:37:47.544274 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:37:47.544284 | orchestrator | 2025-09-19 00:37:47.544295 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:37:47.544307 | orchestrator | Friday 19 September 2025 00:37:42 +0000 (0:00:00.184) 0:00:02.516 ****** 2025-09-19 00:37:47.544317 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:37:47.544328 | orchestrator | 2025-09-19 00:37:47.544339 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:37:47.544350 | orchestrator | Friday 19 September 2025 00:37:42 +0000 (0:00:00.186) 0:00:02.702 ****** 2025-09-19 00:37:47.544361 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_55973005-cab9-4651-a089-f76828fe5b13) 2025-09-19 00:37:47.544373 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_55973005-cab9-4651-a089-f76828fe5b13) 2025-09-19 00:37:47.544383 | orchestrator | 2025-09-19 00:37:47.544394 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:37:47.544405 | orchestrator | Friday 19 September 2025 00:37:43 +0000 (0:00:00.372) 0:00:03.075 ****** 2025-09-19 00:37:47.544435 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5095dff0-407e-4b8b-811f-a3c5cd55a16d) 2025-09-19 00:37:47.544447 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5095dff0-407e-4b8b-811f-a3c5cd55a16d) 2025-09-19 00:37:47.544458 | orchestrator | 2025-09-19 00:37:47.544469 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:37:47.544479 | orchestrator | Friday 19 September 2025 00:37:43 +0000 (0:00:00.405) 0:00:03.481 ****** 2025-09-19 00:37:47.544490 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7d2555f8-8f26-4f5e-8b79-cd121c4d405f) 2025-09-19 00:37:47.544501 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7d2555f8-8f26-4f5e-8b79-cd121c4d405f) 2025-09-19 00:37:47.544512 | orchestrator | 2025-09-19 00:37:47.544522 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:37:47.544533 | orchestrator | Friday 19 September 2025 00:37:44 +0000 (0:00:00.654) 0:00:04.135 ****** 2025-09-19 00:37:47.544544 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ace41295-549a-4643-92eb-07daa5f39402) 2025-09-19 00:37:47.544566 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ace41295-549a-4643-92eb-07daa5f39402) 2025-09-19 00:37:47.544577 | orchestrator | 2025-09-19 00:37:47.544587 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:37:47.544598 | orchestrator | Friday 19 September 2025 00:37:44 +0000 (0:00:00.647) 0:00:04.783 ****** 2025-09-19 00:37:47.544609 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-19 00:37:47.544620 | orchestrator | 2025-09-19 00:37:47.544630 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:37:47.544648 | orchestrator | Friday 19 September 2025 00:37:45 +0000 (0:00:00.788) 0:00:05.571 ****** 2025-09-19 00:37:47.544659 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-19 00:37:47.544670 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-19 00:37:47.544681 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-19 00:37:47.544691 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => 
(item=loop3) 2025-09-19 00:37:47.544702 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-19 00:37:47.544712 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-19 00:37:47.544723 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-19 00:37:47.544733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-19 00:37:47.544744 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-19 00:37:47.544754 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-19 00:37:47.544765 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-19 00:37:47.544775 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-19 00:37:47.544786 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-19 00:37:47.544797 | orchestrator | 2025-09-19 00:37:47.544807 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:37:47.544818 | orchestrator | Friday 19 September 2025 00:37:45 +0000 (0:00:00.365) 0:00:05.937 ****** 2025-09-19 00:37:47.544829 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:37:47.544840 | orchestrator | 2025-09-19 00:37:47.544850 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:37:47.544861 | orchestrator | Friday 19 September 2025 00:37:46 +0000 (0:00:00.206) 0:00:06.143 ****** 2025-09-19 00:37:47.544872 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:37:47.544883 | orchestrator | 2025-09-19 00:37:47.544893 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-09-19 00:37:47.544904 | orchestrator | Friday 19 September 2025 00:37:46 +0000 (0:00:00.199) 0:00:06.343 ****** 2025-09-19 00:37:47.544914 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:37:47.544925 | orchestrator | 2025-09-19 00:37:47.544936 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:37:47.544946 | orchestrator | Friday 19 September 2025 00:37:46 +0000 (0:00:00.198) 0:00:06.541 ****** 2025-09-19 00:37:47.544957 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:37:47.544967 | orchestrator | 2025-09-19 00:37:47.544979 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:37:47.544989 | orchestrator | Friday 19 September 2025 00:37:46 +0000 (0:00:00.199) 0:00:06.741 ****** 2025-09-19 00:37:47.545000 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:37:47.545011 | orchestrator | 2025-09-19 00:37:47.545021 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:37:47.545049 | orchestrator | Friday 19 September 2025 00:37:46 +0000 (0:00:00.235) 0:00:06.977 ****** 2025-09-19 00:37:47.545069 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:37:47.545086 | orchestrator | 2025-09-19 00:37:47.545103 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:37:47.545131 | orchestrator | Friday 19 September 2025 00:37:47 +0000 (0:00:00.208) 0:00:07.185 ****** 2025-09-19 00:37:47.545152 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:37:47.545172 | orchestrator | 2025-09-19 00:37:47.545191 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:37:47.545247 | orchestrator | Friday 19 September 2025 00:37:47 +0000 (0:00:00.186) 0:00:07.372 ****** 2025-09-19 00:37:47.545272 | 
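The repeated "Add known links" tasks above run `_add-device-links.yml` once per device and record which `/dev/disk/by-id` names (e.g. `scsi-0QEMU_QEMU_HARDDISK_…`) point at each kernel device. The exact role logic is not visible in the log; a minimal sketch of the underlying idea, assuming by-id entries are relative symlinks like `../../sdb` and using a made-up disk ID for the test fixture, is:

```python
import os
import tempfile

def links_for_device(by_id_dir: str, device: str) -> list:
    """Names in by_id_dir whose symlink target's basename is <device>.

    Simplification of what /dev/disk/by-id resolution does on a real host.
    """
    out = []
    for name in sorted(os.listdir(by_id_dir)):
        path = os.path.join(by_id_dir, name)
        if os.path.islink(path) and os.path.basename(os.readlink(path)) == device:
            out.append(name)
    return out

# Build a fake by-id directory (hypothetical disk serial) and query it.
with tempfile.TemporaryDirectory() as d:
    os.symlink("../../sdb", os.path.join(d, "scsi-0QEMU_QEMU_HARDDISK_example"))
    os.symlink("../../sr0", os.path.join(d, "ata-QEMU_DVD-ROM_QM00001"))
    sdb_links = links_for_device(d, "sdb")
```

In the log, each OSD disk resolves to two such links (a `scsi-0QEMU…` and a `scsi-SQEMU…` alias for the same serial), and the DVD drive resolves to `ata-QEMU_DVD-ROM_QM00001`.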
orchestrator | skipping: [testbed-node-3]
2025-09-19 00:37:55.171579 | orchestrator |
2025-09-19 00:37:55.171684 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:37:55.171701 | orchestrator | Friday 19 September 2025 00:37:47 +0000 (0:00:00.211) 0:00:07.583 ******
2025-09-19 00:37:55.171712 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-09-19 00:37:55.171725 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-09-19 00:37:55.171737 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-09-19 00:37:55.171748 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-09-19 00:37:55.171758 | orchestrator |
2025-09-19 00:37:55.171770 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:37:55.171781 | orchestrator | Friday 19 September 2025 00:37:48 +0000 (0:00:01.012) 0:00:08.596 ******
2025-09-19 00:37:55.171792 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:37:55.171803 | orchestrator |
2025-09-19 00:37:55.171814 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:37:55.171825 | orchestrator | Friday 19 September 2025 00:37:48 +0000 (0:00:00.196) 0:00:08.792 ******
2025-09-19 00:37:55.171836 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:37:55.171846 | orchestrator |
2025-09-19 00:37:55.171857 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:37:55.171868 | orchestrator | Friday 19 September 2025 00:37:48 +0000 (0:00:00.225) 0:00:09.017 ******
2025-09-19 00:37:55.171879 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:37:55.171890 | orchestrator |
2025-09-19 00:37:55.171901 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:37:55.171911 | orchestrator | Friday 19 September 2025 00:37:49 +0000 (0:00:00.202) 0:00:09.220 ******
2025-09-19 00:37:55.171922 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:37:55.171933 | orchestrator |
2025-09-19 00:37:55.171944 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-09-19 00:37:55.171955 | orchestrator | Friday 19 September 2025 00:37:49 +0000 (0:00:00.199) 0:00:09.420 ******
2025-09-19 00:37:55.171965 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-09-19 00:37:55.171976 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-09-19 00:37:55.171987 | orchestrator |
2025-09-19 00:37:55.171998 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-09-19 00:37:55.172009 | orchestrator | Friday 19 September 2025 00:37:49 +0000 (0:00:00.173) 0:00:09.594 ******
2025-09-19 00:37:55.172038 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:37:55.172050 | orchestrator |
2025-09-19 00:37:55.172062 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-09-19 00:37:55.172073 | orchestrator | Friday 19 September 2025 00:37:49 +0000 (0:00:00.138) 0:00:09.733 ******
2025-09-19 00:37:55.172084 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:37:55.172094 | orchestrator |
2025-09-19 00:37:55.172108 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-09-19 00:37:55.172120 | orchestrator | Friday 19 September 2025 00:37:49 +0000 (0:00:00.129) 0:00:09.862 ******
2025-09-19 00:37:55.172132 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:37:55.172145 | orchestrator |
2025-09-19 00:37:55.172183 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-09-19 00:37:55.172196 | orchestrator | Friday 19 September 2025 00:37:49 +0000 (0:00:00.146) 0:00:10.009 ******
2025-09-19 00:37:55.172247 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:37:55.172261 | orchestrator |
2025-09-19 00:37:55.172273 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-09-19 00:37:55.172285 | orchestrator | Friday 19 September 2025 00:37:50 +0000 (0:00:00.133) 0:00:10.142 ******
2025-09-19 00:37:55.172299 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bc7aa585-dea2-57c4-a9fa-18818632dc3c'}})
2025-09-19 00:37:55.172311 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ba978b90-a663-5d0c-8f05-4b4e8986f79e'}})
2025-09-19 00:37:55.172323 | orchestrator |
2025-09-19 00:37:55.172336 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-09-19 00:37:55.172348 | orchestrator | Friday 19 September 2025 00:37:50 +0000 (0:00:00.192) 0:00:10.335 ******
2025-09-19 00:37:55.172361 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bc7aa585-dea2-57c4-a9fa-18818632dc3c'}})
2025-09-19 00:37:55.172381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ba978b90-a663-5d0c-8f05-4b4e8986f79e'}})
2025-09-19 00:37:55.172394 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:37:55.172406 | orchestrator |
2025-09-19 00:37:55.172419 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-09-19 00:37:55.172431 | orchestrator | Friday 19 September 2025 00:37:50 +0000 (0:00:00.160) 0:00:10.495 ******
2025-09-19 00:37:55.172444 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bc7aa585-dea2-57c4-a9fa-18818632dc3c'}})
2025-09-19 00:37:55.172457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ba978b90-a663-5d0c-8f05-4b4e8986f79e'}})
2025-09-19 00:37:55.172468 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:37:55.172479 | orchestrator |
2025-09-19 00:37:55.172490 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-09-19 00:37:55.172501 | orchestrator | Friday 19 September 2025 00:37:50 +0000 (0:00:00.151) 0:00:10.647 ******
2025-09-19 00:37:55.172512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bc7aa585-dea2-57c4-a9fa-18818632dc3c'}})
2025-09-19 00:37:55.172523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ba978b90-a663-5d0c-8f05-4b4e8986f79e'}})
2025-09-19 00:37:55.172534 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:37:55.172545 | orchestrator |
2025-09-19 00:37:55.172573 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-09-19 00:37:55.172585 | orchestrator | Friday 19 September 2025 00:37:50 +0000 (0:00:00.361) 0:00:11.008 ******
2025-09-19 00:37:55.172596 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:37:55.172606 | orchestrator |
2025-09-19 00:37:55.172623 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-09-19 00:37:55.172635 | orchestrator | Friday 19 September 2025 00:37:51 +0000 (0:00:00.153) 0:00:11.161 ******
2025-09-19 00:37:55.172646 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:37:55.172656 | orchestrator |
2025-09-19 00:37:55.172667 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-09-19 00:37:55.172678 | orchestrator | Friday 19 September 2025 00:37:51 +0000 (0:00:00.147) 0:00:11.309 ******
2025-09-19 00:37:55.172689 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:37:55.172699 | orchestrator |
2025-09-19 00:37:55.172710 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-09-19 00:37:55.172721 | orchestrator | Friday 19 September 2025 00:37:51 +0000 (0:00:00.130)
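The "Set UUIDs for OSD VGs/LVs" task above turns each device entry whose value is `None` into a stable `osd_lvm_uuid`. The UUIDs that appear later in the log (`bc7aa585-dea2-57c4-…`, `ba978b90-a663-5d0c-…`) are version 5, i.e. name-derived rather than random. The namespace and name material the OSISM role actually uses are not visible in this log; the sketch below only illustrates the name-based-UUID idea with an assumed namespace and an assumed `host-device` naming scheme:

```python
import uuid

# Assumed namespace for illustration; the real seed is not shown in the log.
NAMESPACE = uuid.NAMESPACE_DNS

def osd_uuid(hostname: str, device: str) -> uuid.UUID:
    """Deterministic, name-derived UUID for a host/device pair.

    uuid5 always yields a version-5 UUID, matching the version of the
    osd_lvm_uuid values printed in the log; identical inputs always
    produce the identical UUID, which is what makes re-runs idempotent.
    """
    return uuid.uuid5(NAMESPACE, f"{hostname}-{device}")

first = osd_uuid("testbed-node-3", "sdb")
second = osd_uuid("testbed-node-3", "sdb")  # same inputs, same UUID
other = osd_uuid("testbed-node-3", "sdc")   # different device, different UUID
```

Determinism matters here because the UUID is embedded in VG/LV names; a random UUID would produce new volume names on every configuration run.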
0:00:11.440 ******
2025-09-19 00:37:55.172731 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:37:55.172742 | orchestrator |
2025-09-19 00:37:55.172753 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-09-19 00:37:55.172771 | orchestrator | Friday 19 September 2025 00:37:51 +0000 (0:00:00.143) 0:00:11.584 ******
2025-09-19 00:37:55.172782 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:37:55.172793 | orchestrator |
2025-09-19 00:37:55.172804 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-19 00:37:55.172815 | orchestrator | Friday 19 September 2025 00:37:51 +0000 (0:00:00.133) 0:00:11.731 ******
2025-09-19 00:37:55.172826 | orchestrator | ok: [testbed-node-3] => {
2025-09-19 00:37:55.172836 | orchestrator |  "ceph_osd_devices": {
2025-09-19 00:37:55.172847 | orchestrator |  "sdb": {
2025-09-19 00:37:55.172859 | orchestrator |  "osd_lvm_uuid": "bc7aa585-dea2-57c4-a9fa-18818632dc3c"
2025-09-19 00:37:55.172870 | orchestrator |  },
2025-09-19 00:37:55.172881 | orchestrator |  "sdc": {
2025-09-19 00:37:55.172891 | orchestrator |  "osd_lvm_uuid": "ba978b90-a663-5d0c-8f05-4b4e8986f79e"
2025-09-19 00:37:55.172902 | orchestrator |  }
2025-09-19 00:37:55.172913 | orchestrator |  }
2025-09-19 00:37:55.172924 | orchestrator | }
2025-09-19 00:37:55.172935 | orchestrator |
2025-09-19 00:37:55.172946 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-19 00:37:55.172956 | orchestrator | Friday 19 September 2025 00:37:51 +0000 (0:00:00.133) 0:00:11.864 ******
2025-09-19 00:37:55.172967 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:37:55.172978 | orchestrator |
2025-09-19 00:37:55.172988 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-19 00:37:55.172999 | orchestrator | Friday 19 September 2025 00:37:51 +0000 (0:00:00.137) 0:00:12.003 ******
2025-09-19 00:37:55.173010 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:37:55.173021 | orchestrator |
2025-09-19 00:37:55.173031 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-19 00:37:55.173042 | orchestrator | Friday 19 September 2025 00:37:52 +0000 (0:00:00.131) 0:00:12.134 ******
2025-09-19 00:37:55.173053 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:37:55.173064 | orchestrator |
2025-09-19 00:37:55.173074 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-19 00:37:55.173085 | orchestrator | Friday 19 September 2025 00:37:52 +0000 (0:00:00.136) 0:00:12.270 ******
2025-09-19 00:37:55.173096 | orchestrator | changed: [testbed-node-3] => {
2025-09-19 00:37:55.173107 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-09-19 00:37:55.173117 | orchestrator |  "ceph_osd_devices": {
2025-09-19 00:37:55.173128 | orchestrator |  "sdb": {
2025-09-19 00:37:55.173139 | orchestrator |  "osd_lvm_uuid": "bc7aa585-dea2-57c4-a9fa-18818632dc3c"
2025-09-19 00:37:55.173149 | orchestrator |  },
2025-09-19 00:37:55.173160 | orchestrator |  "sdc": {
2025-09-19 00:37:55.173171 | orchestrator |  "osd_lvm_uuid": "ba978b90-a663-5d0c-8f05-4b4e8986f79e"
2025-09-19 00:37:55.173182 | orchestrator |  }
2025-09-19 00:37:55.173193 | orchestrator |  },
2025-09-19 00:37:55.173220 | orchestrator |  "lvm_volumes": [
2025-09-19 00:37:55.173232 | orchestrator |  {
2025-09-19 00:37:55.173243 | orchestrator |  "data": "osd-block-bc7aa585-dea2-57c4-a9fa-18818632dc3c",
2025-09-19 00:37:55.173254 | orchestrator |  "data_vg": "ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c"
2025-09-19 00:37:55.173264 | orchestrator |  },
2025-09-19 00:37:55.173275 | orchestrator |  {
2025-09-19 00:37:55.173286 | orchestrator |  "data": "osd-block-ba978b90-a663-5d0c-8f05-4b4e8986f79e",
2025-09-19 00:37:55.173296 | orchestrator |  "data_vg": "ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e"
2025-09-19 00:37:55.173307 | orchestrator |  }
2025-09-19 00:37:55.173318 | orchestrator |  ]
2025-09-19 00:37:55.173328 | orchestrator |  }
2025-09-19 00:37:55.173339 | orchestrator | }
2025-09-19 00:37:55.173350 | orchestrator |
2025-09-19 00:37:55.173366 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-19 00:37:55.173377 | orchestrator | Friday 19 September 2025 00:37:52 +0000 (0:00:00.201) 0:00:12.471 ******
2025-09-19 00:37:55.173395 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-19 00:37:55.173406 | orchestrator |
2025-09-19 00:37:55.173417 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-19 00:37:55.173427 | orchestrator |
2025-09-19 00:37:55.173438 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-19 00:37:55.173449 | orchestrator | Friday 19 September 2025 00:37:54 +0000 (0:00:02.240) 0:00:14.711 ******
2025-09-19 00:37:55.173460 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-09-19 00:37:55.173470 | orchestrator |
2025-09-19 00:37:55.173481 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-19 00:37:55.173492 | orchestrator | Friday 19 September 2025 00:37:54 +0000 (0:00:00.272) 0:00:14.984 ******
2025-09-19 00:37:55.173503 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:37:55.173514 | orchestrator |
2025-09-19 00:37:55.173524 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:37:55.173542 | orchestrator | Friday 19 September 2025 00:37:55 +0000 (0:00:00.229) 0:00:15.213 ******
2025-09-19 00:38:03.288116 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-09-19 00:38:03.288258 | orchestrator | included:
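The configuration data printed for testbed-node-3 shows the mapping this play computes: each entry in `ceph_osd_devices` contributes one `lvm_volumes` item whose LV is named `osd-block-<uuid>` inside a VG named `ceph-<uuid>`. A minimal sketch of that transformation (the helper name is mine, not the role's) reproduces exactly the structure written to the configuration file:

```python
def build_lvm_volumes(ceph_osd_devices: dict) -> list:
    """Block-only lvm_volumes entries, mirroring the mapping in the log:
    osd_lvm_uuid -> {"data": "osd-block-<uuid>", "data_vg": "ceph-<uuid>"}."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in ceph_osd_devices.values()
    ]

# The values printed by "TASK [Print ceph_osd_devices]" for testbed-node-3.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "bc7aa585-dea2-57c4-a9fa-18818632dc3c"},
    "sdc": {"osd_lvm_uuid": "ba978b90-a663-5d0c-8f05-4b4e8986f79e"},
}
lvm_volumes = build_lvm_volumes(ceph_osd_devices)
```

The "block + db" and "block + wal" variants are skipped in this run because no separate DB/WAL devices are configured, so only the block-only structure reaches the written file.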
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-19 00:38:03.288275 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-19 00:38:03.288286 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-19 00:38:03.288297 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-19 00:38:03.288308 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-19 00:38:03.288318 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-19 00:38:03.288328 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-19 00:38:03.288339 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-19 00:38:03.288350 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-19 00:38:03.288360 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-19 00:38:03.288370 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-19 00:38:03.288381 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-19 00:38:03.288392 | orchestrator | 2025-09-19 00:38:03.288409 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:38:03.288421 | orchestrator | Friday 19 September 2025 00:37:55 +0000 (0:00:00.381) 0:00:15.595 ****** 2025-09-19 00:38:03.288433 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:38:03.288445 | orchestrator | 2025-09-19 00:38:03.288456 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 
00:38:03.288467 | orchestrator | Friday 19 September 2025 00:37:55 +0000 (0:00:00.262) 0:00:15.858 ******
2025-09-19 00:38:03.288478 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:03.288489 | orchestrator |
2025-09-19 00:38:03.288500 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:38:03.288513 | orchestrator | Friday 19 September 2025 00:37:56 +0000 (0:00:00.216) 0:00:16.074 ******
2025-09-19 00:38:03.288526 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:03.288539 | orchestrator |
2025-09-19 00:38:03.288552 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:38:03.288564 | orchestrator | Friday 19 September 2025 00:37:56 +0000 (0:00:00.205) 0:00:16.280 ******
2025-09-19 00:38:03.288577 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:03.288590 | orchestrator |
2025-09-19 00:38:03.288626 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:38:03.288639 | orchestrator | Friday 19 September 2025 00:37:56 +0000 (0:00:00.203) 0:00:16.483 ******
2025-09-19 00:38:03.288652 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:03.288665 | orchestrator |
2025-09-19 00:38:03.288677 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:38:03.288690 | orchestrator | Friday 19 September 2025 00:37:56 +0000 (0:00:00.193) 0:00:16.677 ******
2025-09-19 00:38:03.288703 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:03.288716 | orchestrator |
2025-09-19 00:38:03.288729 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:38:03.288758 | orchestrator | Friday 19 September 2025 00:37:57 +0000 (0:00:00.605) 0:00:17.282 ******
2025-09-19 00:38:03.288771 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:03.288784 | orchestrator |
2025-09-19 00:38:03.288797 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:38:03.288810 | orchestrator | Friday 19 September 2025 00:37:57 +0000 (0:00:00.200) 0:00:17.483 ******
2025-09-19 00:38:03.288822 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:03.288834 | orchestrator |
2025-09-19 00:38:03.288847 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:38:03.288859 | orchestrator | Friday 19 September 2025 00:37:57 +0000 (0:00:00.191) 0:00:17.674 ******
2025-09-19 00:38:03.288872 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3adbf97e-ee72-4483-9697-646cf4299ea9)
2025-09-19 00:38:03.288887 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3adbf97e-ee72-4483-9697-646cf4299ea9)
2025-09-19 00:38:03.288900 | orchestrator |
2025-09-19 00:38:03.288912 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:38:03.288925 | orchestrator | Friday 19 September 2025 00:37:58 +0000 (0:00:00.422) 0:00:18.096 ******
2025-09-19 00:38:03.288938 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_94fdce60-5769-46af-b883-c01ec9bbc4f3)
2025-09-19 00:38:03.288949 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_94fdce60-5769-46af-b883-c01ec9bbc4f3)
2025-09-19 00:38:03.288960 | orchestrator |
2025-09-19 00:38:03.288971 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:38:03.288982 | orchestrator | Friday 19 September 2025 00:37:58 +0000 (0:00:00.441) 0:00:18.538 ******
2025-09-19 00:38:03.288993 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7d861b66-423b-4a73-89d0-4a2393a19521)
2025-09-19 00:38:03.289004 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7d861b66-423b-4a73-89d0-4a2393a19521)
2025-09-19 00:38:03.289014 | orchestrator |
2025-09-19 00:38:03.289025 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:38:03.289036 | orchestrator | Friday 19 September 2025 00:37:58 +0000 (0:00:00.425) 0:00:18.964 ******
2025-09-19 00:38:03.289066 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b274d452-dc05-477a-a838-600cb81e7cbe)
2025-09-19 00:38:03.289078 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b274d452-dc05-477a-a838-600cb81e7cbe)
2025-09-19 00:38:03.289089 | orchestrator |
2025-09-19 00:38:03.289100 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:38:03.289111 | orchestrator | Friday 19 September 2025 00:37:59 +0000 (0:00:00.462) 0:00:19.426 ******
2025-09-19 00:38:03.289122 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-19 00:38:03.289133 | orchestrator |
2025-09-19 00:38:03.289145 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:03.289156 | orchestrator | Friday 19 September 2025 00:37:59 +0000 (0:00:00.336) 0:00:19.763 ******
2025-09-19 00:38:03.289167 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-09-19 00:38:03.289177 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-09-19 00:38:03.289196 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-09-19 00:38:03.289229 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-09-19 00:38:03.289240 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-09-19 00:38:03.289250 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-09-19 00:38:03.289261 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-09-19 00:38:03.289272 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-09-19 00:38:03.289283 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-09-19 00:38:03.289293 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-09-19 00:38:03.289304 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-09-19 00:38:03.289314 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-09-19 00:38:03.289325 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-09-19 00:38:03.289336 | orchestrator |
2025-09-19 00:38:03.289346 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:03.289357 | orchestrator | Friday 19 September 2025 00:38:00 +0000 (0:00:00.371) 0:00:20.135 ******
2025-09-19 00:38:03.289368 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:03.289379 | orchestrator |
2025-09-19 00:38:03.289390 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:03.289401 | orchestrator | Friday 19 September 2025 00:38:00 +0000 (0:00:00.201) 0:00:20.336 ******
2025-09-19 00:38:03.289412 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:03.289422 | orchestrator |
2025-09-19 00:38:03.289439 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:03.289450 | orchestrator | Friday 19 September 2025 00:38:01 +0000 (0:00:00.721) 0:00:21.057 ******
2025-09-19 00:38:03.289461 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:03.289471 | orchestrator |
2025-09-19 00:38:03.289482 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:03.289493 | orchestrator | Friday 19 September 2025 00:38:01 +0000 (0:00:00.253) 0:00:21.311 ******
2025-09-19 00:38:03.289504 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:03.289514 | orchestrator |
2025-09-19 00:38:03.289526 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:03.289537 | orchestrator | Friday 19 September 2025 00:38:01 +0000 (0:00:00.234) 0:00:21.546 ******
2025-09-19 00:38:03.289548 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:03.289558 | orchestrator |
2025-09-19 00:38:03.289569 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:03.289580 | orchestrator | Friday 19 September 2025 00:38:01 +0000 (0:00:00.202) 0:00:21.748 ******
2025-09-19 00:38:03.289591 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:03.289602 | orchestrator |
2025-09-19 00:38:03.289612 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:03.289623 | orchestrator | Friday 19 September 2025 00:38:01 +0000 (0:00:00.213) 0:00:21.961 ******
2025-09-19 00:38:03.289634 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:03.289645 | orchestrator |
2025-09-19 00:38:03.289655 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:03.289666 | orchestrator | Friday 19 September 2025 00:38:02 +0000 (0:00:00.265) 0:00:22.227 ******
2025-09-19 00:38:03.289677 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:03.289688 | orchestrator |
2025-09-19 00:38:03.289699 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:03.289716 | orchestrator | Friday 19 September 2025 00:38:02 +0000 (0:00:00.229) 0:00:22.456 ******
2025-09-19 00:38:03.289727 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-09-19 00:38:03.289739 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-09-19 00:38:03.289750 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-09-19 00:38:03.289760 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-09-19 00:38:03.289771 | orchestrator |
2025-09-19 00:38:03.289782 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:03.289793 | orchestrator | Friday 19 September 2025 00:38:03 +0000 (0:00:00.684) 0:00:23.141 ******
2025-09-19 00:38:03.289804 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:03.289814 | orchestrator |
2025-09-19 00:38:03.289832 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:08.957880 | orchestrator | Friday 19 September 2025 00:38:03 +0000 (0:00:00.191) 0:00:23.332 ******
2025-09-19 00:38:08.957967 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:08.957983 | orchestrator |
2025-09-19 00:38:08.957995 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:08.958007 | orchestrator | Friday 19 September 2025 00:38:03 +0000 (0:00:00.181) 0:00:23.514 ******
2025-09-19 00:38:08.958086 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:08.958108 | orchestrator |
2025-09-19 00:38:08.958129 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:08.958150 | orchestrator | Friday 19 September 2025 00:38:03 +0000 (0:00:00.195) 0:00:23.709 ******
2025-09-19 00:38:08.958171 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:08.958191 | orchestrator |
2025-09-19 00:38:08.958274 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-09-19 00:38:08.958286 | orchestrator | Friday 19 September 2025 00:38:03 +0000 (0:00:00.187) 0:00:23.896 ******
2025-09-19 00:38:08.958297 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-09-19 00:38:08.958307 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-09-19 00:38:08.958318 | orchestrator |
2025-09-19 00:38:08.958329 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-09-19 00:38:08.958340 | orchestrator | Friday 19 September 2025 00:38:04 +0000 (0:00:00.365) 0:00:24.262 ******
2025-09-19 00:38:08.958351 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:08.958362 | orchestrator |
2025-09-19 00:38:08.958372 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-09-19 00:38:08.958383 | orchestrator | Friday 19 September 2025 00:38:04 +0000 (0:00:00.119) 0:00:24.382 ******
2025-09-19 00:38:08.958394 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:08.958405 | orchestrator |
2025-09-19 00:38:08.958416 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-09-19 00:38:08.958427 | orchestrator | Friday 19 September 2025 00:38:04 +0000 (0:00:00.135) 0:00:24.517 ******
2025-09-19 00:38:08.958440 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:08.958454 | orchestrator |
2025-09-19 00:38:08.958467 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-09-19 00:38:08.958479 | orchestrator | Friday 19 September 2025 00:38:04 +0000 (0:00:00.134) 0:00:24.652 ******
2025-09-19 00:38:08.958491 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:38:08.958505 | orchestrator |
2025-09-19 00:38:08.958517 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-09-19 00:38:08.958530 | orchestrator | Friday 19 September 2025 00:38:04 +0000 (0:00:00.137) 0:00:24.789 ******
2025-09-19 00:38:08.958542 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7c9f8b51-166c-5055-bfcb-65abe80d3110'}})
2025-09-19 00:38:08.958555 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '25e4de26-ffd2-5ba5-a3e7-287c918a347b'}})
2025-09-19 00:38:08.958568 | orchestrator |
2025-09-19 00:38:08.958581 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-09-19 00:38:08.958616 | orchestrator | Friday 19 September 2025 00:38:04 +0000 (0:00:00.173) 0:00:24.963 ******
2025-09-19 00:38:08.958629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7c9f8b51-166c-5055-bfcb-65abe80d3110'}})
2025-09-19 00:38:08.958643 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '25e4de26-ffd2-5ba5-a3e7-287c918a347b'}})
2025-09-19 00:38:08.958656 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:08.958669 | orchestrator |
2025-09-19 00:38:08.958695 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-09-19 00:38:08.958708 | orchestrator | Friday 19 September 2025 00:38:05 +0000 (0:00:00.150) 0:00:25.113 ******
2025-09-19 00:38:08.958721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7c9f8b51-166c-5055-bfcb-65abe80d3110'}})
2025-09-19 00:38:08.958734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '25e4de26-ffd2-5ba5-a3e7-287c918a347b'}})
2025-09-19 00:38:08.958747 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:08.958759 | orchestrator |
2025-09-19 00:38:08.958772 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-09-19 00:38:08.958784 | orchestrator | Friday 19 September 2025 00:38:05 +0000 (0:00:00.118) 0:00:25.232 ******
2025-09-19 00:38:08.958796 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7c9f8b51-166c-5055-bfcb-65abe80d3110'}})
2025-09-19 00:38:08.958807 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '25e4de26-ffd2-5ba5-a3e7-287c918a347b'}})
2025-09-19 00:38:08.958818 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:08.958829 | orchestrator |
2025-09-19 00:38:08.958840 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-09-19 00:38:08.958851 | orchestrator | Friday 19 September 2025 00:38:05 +0000 (0:00:00.140) 0:00:25.373 ******
2025-09-19 00:38:08.958861 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:38:08.958872 | orchestrator |
2025-09-19 00:38:08.958883 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-09-19 00:38:08.958894 | orchestrator | Friday 19 September 2025 00:38:05 +0000 (0:00:00.125) 0:00:25.499 ******
2025-09-19 00:38:08.958905 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:38:08.958916 | orchestrator |
2025-09-19 00:38:08.958927 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-09-19 00:38:08.958938 | orchestrator | Friday 19 September 2025 00:38:05 +0000 (0:00:00.121) 0:00:25.621 ******
2025-09-19 00:38:08.958949 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:08.958960 | orchestrator |
2025-09-19 00:38:08.958988 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-09-19 00:38:08.959000 | orchestrator | Friday 19 September 2025 00:38:05 +0000 (0:00:00.115) 0:00:25.736 ******
2025-09-19 00:38:08.959011 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:08.959021 | orchestrator |
2025-09-19 00:38:08.959032 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-09-19 00:38:08.959043 | orchestrator | Friday 19 September 2025 00:38:05 +0000 (0:00:00.261) 0:00:25.998 ******
2025-09-19 00:38:08.959054 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:08.959065 | orchestrator |
2025-09-19 00:38:08.959075 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-19 00:38:08.959086 | orchestrator | Friday 19 September 2025 00:38:06 +0000 (0:00:00.128) 0:00:26.127 ******
2025-09-19 00:38:08.959097 | orchestrator | ok: [testbed-node-4] => {
2025-09-19 00:38:08.959107 | orchestrator |     "ceph_osd_devices": {
2025-09-19 00:38:08.959118 | orchestrator |         "sdb": {
2025-09-19 00:38:08.959130 | orchestrator |             "osd_lvm_uuid": "7c9f8b51-166c-5055-bfcb-65abe80d3110"
2025-09-19 00:38:08.959141 | orchestrator |         },
2025-09-19 00:38:08.959152 | orchestrator |         "sdc": {
2025-09-19 00:38:08.959163 | orchestrator |             "osd_lvm_uuid": "25e4de26-ffd2-5ba5-a3e7-287c918a347b"
2025-09-19 00:38:08.959180 | orchestrator |         }
2025-09-19 00:38:08.959191 | orchestrator |     }
2025-09-19 00:38:08.959221 | orchestrator | }
2025-09-19 00:38:08.959232 | orchestrator |
2025-09-19 00:38:08.959243 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-19 00:38:08.959254 | orchestrator | Friday 19 September 2025 00:38:06 +0000 (0:00:00.096) 0:00:26.224 ******
2025-09-19 00:38:08.959265 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:08.959275 | orchestrator |
2025-09-19 00:38:08.959286 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-19 00:38:08.959297 | orchestrator | Friday 19 September 2025 00:38:06 +0000 (0:00:00.134) 0:00:26.359 ******
2025-09-19 00:38:08.959307 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:08.959318 | orchestrator |
2025-09-19 00:38:08.959329 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-19 00:38:08.959340 | orchestrator | Friday 19 September 2025 00:38:06 +0000 (0:00:00.113) 0:00:26.472 ******
2025-09-19 00:38:08.959351 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:38:08.959361 | orchestrator |
2025-09-19 00:38:08.959372 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-19 00:38:08.959383 | orchestrator | Friday 19 September 2025 00:38:06 +0000 (0:00:00.113) 0:00:26.585 ******
2025-09-19 00:38:08.959393 | orchestrator | changed: [testbed-node-4] => {
2025-09-19 00:38:08.959404 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-09-19 00:38:08.959415 | orchestrator |         "ceph_osd_devices": {
2025-09-19 00:38:08.959425 | orchestrator |             "sdb": {
2025-09-19 00:38:08.959436 | orchestrator |                 "osd_lvm_uuid": "7c9f8b51-166c-5055-bfcb-65abe80d3110"
2025-09-19 00:38:08.959447 | orchestrator |             },
2025-09-19 00:38:08.959458 | orchestrator |             "sdc": {
2025-09-19 00:38:08.959469 | orchestrator |                 "osd_lvm_uuid": "25e4de26-ffd2-5ba5-a3e7-287c918a347b"
2025-09-19 00:38:08.959480 | orchestrator |             }
2025-09-19 00:38:08.959490 | orchestrator |         },
2025-09-19 00:38:08.959501 | orchestrator |         "lvm_volumes": [
2025-09-19 00:38:08.959512 | orchestrator |             {
2025-09-19 00:38:08.959523 | orchestrator |                 "data": "osd-block-7c9f8b51-166c-5055-bfcb-65abe80d3110",
2025-09-19 00:38:08.959533 | orchestrator |                 "data_vg": "ceph-7c9f8b51-166c-5055-bfcb-65abe80d3110"
2025-09-19 00:38:08.959544 | orchestrator |             },
2025-09-19 00:38:08.959554 | orchestrator |             {
2025-09-19 00:38:08.959565 | orchestrator |                 "data": "osd-block-25e4de26-ffd2-5ba5-a3e7-287c918a347b",
2025-09-19 00:38:08.959576 | orchestrator |                 "data_vg": "ceph-25e4de26-ffd2-5ba5-a3e7-287c918a347b"
2025-09-19 00:38:08.959586 | orchestrator |             }
2025-09-19 00:38:08.959597 | orchestrator |         ]
2025-09-19 00:38:08.959607 | orchestrator |     }
2025-09-19 00:38:08.959618 | orchestrator | }
2025-09-19 00:38:08.959629 | orchestrator |
2025-09-19 00:38:08.959639 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-19 00:38:08.959650 | orchestrator | Friday 19 September 2025 00:38:06 +0000 (0:00:00.202) 0:00:26.788 ******
2025-09-19 00:38:08.959661 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-09-19 00:38:08.959672 | orchestrator |
2025-09-19 00:38:08.959682 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-19 00:38:08.959693 | orchestrator |
2025-09-19 00:38:08.959703 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-19 00:38:08.959714 | orchestrator | Friday 19 September 2025 00:38:07 +0000 (0:00:00.907) 0:00:27.695 ******
2025-09-19 00:38:08.959725 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-09-19 00:38:08.959736 | orchestrator |
2025-09-19 00:38:08.959746 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-19 00:38:08.959757 | orchestrator | Friday 19 September 2025 00:38:08 +0000 (0:00:00.380) 0:00:28.075 ******
2025-09-19 00:38:08.959768 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:38:08.959785 | orchestrator |
2025-09-19 00:38:08.959801 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:38:08.959812 | orchestrator | Friday 19 September 2025 00:38:08 +0000 (0:00:00.515) 0:00:28.591 ******
2025-09-19 00:38:08.959823 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-09-19 00:38:08.959834 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-09-19 00:38:08.959845 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-09-19 00:38:08.959855 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-09-19 00:38:08.959866 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-09-19 00:38:08.959876 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-09-19 00:38:08.959893 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-09-19 00:38:16.610983 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-09-19 00:38:16.611076 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-09-19 00:38:16.611089 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-09-19 00:38:16.611099 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-09-19 00:38:16.611109 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-09-19 00:38:16.611118 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-09-19 00:38:16.611128 | orchestrator |
2025-09-19 00:38:16.611139 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:38:16.611150 | orchestrator | Friday 19 September 2025 00:38:08 +0000 (0:00:00.410) 0:00:29.001 ******
2025-09-19 00:38:16.611160 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:38:16.611171 | orchestrator |
2025-09-19 00:38:16.611180 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:38:16.611255 | orchestrator | Friday 19 September 2025 00:38:09 +0000 (0:00:00.204) 0:00:29.206 ******
2025-09-19 00:38:16.611268 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:38:16.611278 | orchestrator |
2025-09-19 00:38:16.611288 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:38:16.611298 | orchestrator | Friday 19 September 2025 00:38:09 +0000 (0:00:00.161) 0:00:29.368 ******
2025-09-19 00:38:16.611307 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:38:16.611317 | orchestrator |
2025-09-19 00:38:16.611327 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:38:16.611337 | orchestrator | Friday 19 September 2025 00:38:09 +0000 (0:00:00.164) 0:00:29.532 ******
2025-09-19 00:38:16.611347 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:38:16.611356 | orchestrator |
2025-09-19 00:38:16.611366 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:38:16.611376 | orchestrator | Friday 19 September 2025 00:38:09 +0000 (0:00:00.164) 0:00:29.696 ******
2025-09-19 00:38:16.611385 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:38:16.611395 | orchestrator |
2025-09-19 00:38:16.611405 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:38:16.611414 | orchestrator | Friday 19 September 2025 00:38:09 +0000 (0:00:00.173) 0:00:29.870 ******
2025-09-19 00:38:16.611427 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:38:16.611446 | orchestrator |
2025-09-19 00:38:16.611463 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:38:16.611482 | orchestrator | Friday 19 September 2025 00:38:09 +0000 (0:00:00.166) 0:00:30.037 ******
2025-09-19 00:38:16.611499 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:38:16.611516 | orchestrator |
2025-09-19 00:38:16.611549 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:38:16.611561 | orchestrator | Friday 19 September 2025 00:38:10 +0000 (0:00:00.169) 0:00:30.206 ******
2025-09-19 00:38:16.611572 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:38:16.611583 | orchestrator |
2025-09-19 00:38:16.611594 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:38:16.611605 | orchestrator | Friday 19 September 2025 00:38:10 +0000 (0:00:00.156) 0:00:30.362 ******
2025-09-19 00:38:16.611616 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3)
2025-09-19 00:38:16.611628 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3)
2025-09-19 00:38:16.611640 | orchestrator |
2025-09-19 00:38:16.611651 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:38:16.611662 | orchestrator | Friday 19 September 2025 00:38:10 +0000 (0:00:00.522) 0:00:30.885 ******
2025-09-19 00:38:16.611673 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5c96df58-7556-4413-84d6-ffa963b8d5b4)
2025-09-19 00:38:16.611683 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5c96df58-7556-4413-84d6-ffa963b8d5b4)
2025-09-19 00:38:16.611694 | orchestrator |
2025-09-19 00:38:16.611705 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:38:16.611716 | orchestrator | Friday 19 September 2025 00:38:11 +0000 (0:00:00.670) 0:00:31.555 ******
2025-09-19 00:38:16.611727 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_037340a3-0b4d-471e-9cf4-4052731628bd)
2025-09-19 00:38:16.611739 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_037340a3-0b4d-471e-9cf4-4052731628bd)
2025-09-19 00:38:16.611750 | orchestrator |
2025-09-19 00:38:16.611760 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:38:16.611771 | orchestrator | Friday 19 September 2025 00:38:11 +0000 (0:00:00.435) 0:00:31.991 ******
2025-09-19 00:38:16.611782 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_253dac68-3781-42b7-8d02-e83cc46bb576)
2025-09-19 00:38:16.611792 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_253dac68-3781-42b7-8d02-e83cc46bb576)
2025-09-19 00:38:16.611803 | orchestrator |
2025-09-19 00:38:16.611813 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:38:16.611824 | orchestrator | Friday 19 September 2025 00:38:12 +0000 (0:00:00.386) 0:00:32.378 ******
2025-09-19 00:38:16.611835 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-19 00:38:16.611846 | orchestrator |
2025-09-19 00:38:16.611858 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:16.611869 | orchestrator | Friday 19 September 2025 00:38:12 +0000 (0:00:00.301) 0:00:32.679 ******
2025-09-19 00:38:16.611895 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-09-19 00:38:16.611905 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-09-19 00:38:16.611915 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-09-19 00:38:16.611924 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-09-19 00:38:16.611934 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-09-19 00:38:16.611943 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-09-19 00:38:16.611975 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-09-19 00:38:16.611993 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-09-19 00:38:16.612009 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-09-19 00:38:16.612025 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-09-19 00:38:16.612052 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-09-19 00:38:16.612068 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-09-19 00:38:16.612081 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-09-19 00:38:16.612091 | orchestrator |
2025-09-19 00:38:16.612101 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:16.612110 | orchestrator | Friday 19 September 2025 00:38:12 +0000 (0:00:00.344) 0:00:33.023 ******
2025-09-19 00:38:16.612120 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:38:16.612130 | orchestrator |
2025-09-19 00:38:16.612139 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:16.612149 | orchestrator | Friday 19 September 2025 00:38:13 +0000 (0:00:00.205) 0:00:33.229 ******
2025-09-19 00:38:16.612159 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:38:16.612168 | orchestrator |
2025-09-19 00:38:16.612177 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:16.612212 | orchestrator | Friday 19 September 2025 00:38:13 +0000 (0:00:00.217) 0:00:33.446 ******
2025-09-19 00:38:16.612226 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:38:16.612235 | orchestrator |
2025-09-19 00:38:16.612245 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:16.612259 | orchestrator | Friday 19 September 2025 00:38:13 +0000 (0:00:00.200) 0:00:33.647 ******
2025-09-19 00:38:16.612269 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:38:16.612279 | orchestrator |
2025-09-19 00:38:16.612288 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:16.612297 | orchestrator | Friday 19 September 2025 00:38:13 +0000 (0:00:00.191) 0:00:33.838 ******
2025-09-19 00:38:16.612307 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:38:16.612316 | orchestrator |
2025-09-19 00:38:16.612326 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:16.612335 | orchestrator | Friday 19 September 2025 00:38:13 +0000 (0:00:00.182) 0:00:34.021 ******
2025-09-19 00:38:16.612345 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:38:16.612354 | orchestrator |
2025-09-19 00:38:16.612364 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:16.612373 | orchestrator | Friday 19 September 2025 00:38:14 +0000 (0:00:00.606) 0:00:34.628 ******
2025-09-19 00:38:16.612383 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:38:16.612392 | orchestrator |
2025-09-19 00:38:16.612401 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:16.612411 | orchestrator | Friday 19 September 2025 00:38:14 +0000 (0:00:00.219) 0:00:34.848 ******
2025-09-19 00:38:16.612421 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:38:16.612430 | orchestrator |
2025-09-19 00:38:16.612439 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:16.612449 | orchestrator | Friday 19 September 2025 00:38:14 +0000 (0:00:00.196) 0:00:35.044 ******
2025-09-19 00:38:16.612459 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-09-19 00:38:16.612468 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-09-19 00:38:16.612478 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-09-19 00:38:16.612487 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-09-19 00:38:16.612497 | orchestrator |
2025-09-19 00:38:16.612506 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:16.612515 | orchestrator | Friday 19 September 2025 00:38:15 +0000 (0:00:00.649) 0:00:35.695 ******
2025-09-19 00:38:16.612525 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:38:16.612534 | orchestrator |
2025-09-19 00:38:16.612544 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:16.612553 | orchestrator | Friday 19 September 2025 00:38:15 +0000 (0:00:00.263) 0:00:35.958 ******
2025-09-19 00:38:16.612570 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:38:16.612587 | orchestrator |
2025-09-19 00:38:16.612602 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:16.612618 | orchestrator | Friday 19 September 2025 00:38:16 +0000 (0:00:00.242) 0:00:36.200 ******
2025-09-19 00:38:16.612635 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:38:16.612651 | orchestrator |
2025-09-19 00:38:16.612667 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:38:16.612679 | orchestrator | Friday 19 September 2025 00:38:16 +0000 (0:00:00.216) 0:00:36.417 ******
2025-09-19 00:38:16.612688 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:38:16.612698 | orchestrator |
2025-09-19 00:38:16.612707 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-09-19 00:38:16.612723 | orchestrator | Friday 19 September 2025 00:38:16 +0000 (0:00:00.234) 0:00:36.651 ******
2025-09-19 00:38:21.120295 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-09-19 00:38:21.120390 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-09-19 00:38:21.120405 | orchestrator |
2025-09-19 00:38:21.120417 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-09-19 00:38:21.120428 | orchestrator | Friday 19 September 2025 00:38:16 +0000 (0:00:00.181) 0:00:36.833 ******
2025-09-19 00:38:21.120439 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:38:21.120450 | orchestrator |
2025-09-19 00:38:21.120461 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-09-19 00:38:21.120472 | orchestrator | Friday 19 September 2025 00:38:16 +0000 (0:00:00.165) 0:00:36.999 ******
2025-09-19 00:38:21.120483 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:38:21.120494 | orchestrator |
2025-09-19 00:38:21.120505 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-09-19 00:38:21.120515 | orchestrator | Friday 19 September 2025 00:38:17 +0000 (0:00:00.161) 0:00:37.161 ******
2025-09-19 00:38:21.120526 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:38:21.120537 | orchestrator |
2025-09-19 00:38:21.120548 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-09-19 00:38:21.120559 | orchestrator | Friday 19 September 2025 00:38:17 +0000 (0:00:00.144) 0:00:37.305 ******
2025-09-19 00:38:21.120570 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:38:21.120581 | orchestrator |
2025-09-19 00:38:21.120592 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-09-19 00:38:21.120603 | orchestrator | Friday 19 September 2025 00:38:17 +0000 (0:00:00.379) 0:00:37.685 ******
2025-09-19 00:38:21.120614 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9c5ae36c-b075-5e22-9b23-69e08de6e546'}})
2025-09-19 00:38:21.120626 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3271a5cd-b931-506b-9a72-a7bc6b6b65fd'}}) 2025-09-19 00:38:21.120637 | orchestrator | 2025-09-19 00:38:21.120648 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-19 00:38:21.120659 | orchestrator | Friday 19 September 2025 00:38:17 +0000 (0:00:00.167) 0:00:37.853 ****** 2025-09-19 00:38:21.120670 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9c5ae36c-b075-5e22-9b23-69e08de6e546'}})  2025-09-19 00:38:21.120682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3271a5cd-b931-506b-9a72-a7bc6b6b65fd'}})  2025-09-19 00:38:21.120693 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:38:21.120704 | orchestrator | 2025-09-19 00:38:21.120715 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-19 00:38:21.120726 | orchestrator | Friday 19 September 2025 00:38:17 +0000 (0:00:00.143) 0:00:37.997 ****** 2025-09-19 00:38:21.120740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9c5ae36c-b075-5e22-9b23-69e08de6e546'}})  2025-09-19 00:38:21.120760 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3271a5cd-b931-506b-9a72-a7bc6b6b65fd'}})  2025-09-19 00:38:21.120802 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:38:21.120817 | orchestrator | 2025-09-19 00:38:21.120829 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-19 00:38:21.120842 | orchestrator | Friday 19 September 2025 00:38:18 +0000 (0:00:00.147) 0:00:38.145 ****** 2025-09-19 00:38:21.120855 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9c5ae36c-b075-5e22-9b23-69e08de6e546'}})  2025-09-19 
00:38:21.120882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3271a5cd-b931-506b-9a72-a7bc6b6b65fd'}})  2025-09-19 00:38:21.120895 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:38:21.120908 | orchestrator | 2025-09-19 00:38:21.120920 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-19 00:38:21.120932 | orchestrator | Friday 19 September 2025 00:38:18 +0000 (0:00:00.141) 0:00:38.287 ****** 2025-09-19 00:38:21.120945 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:38:21.120957 | orchestrator | 2025-09-19 00:38:21.120970 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-19 00:38:21.120983 | orchestrator | Friday 19 September 2025 00:38:18 +0000 (0:00:00.128) 0:00:38.415 ****** 2025-09-19 00:38:21.120995 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:38:21.121007 | orchestrator | 2025-09-19 00:38:21.121019 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-19 00:38:21.121032 | orchestrator | Friday 19 September 2025 00:38:18 +0000 (0:00:00.142) 0:00:38.557 ****** 2025-09-19 00:38:21.121044 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:38:21.121056 | orchestrator | 2025-09-19 00:38:21.121068 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-19 00:38:21.121081 | orchestrator | Friday 19 September 2025 00:38:18 +0000 (0:00:00.164) 0:00:38.722 ****** 2025-09-19 00:38:21.121093 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:38:21.121105 | orchestrator | 2025-09-19 00:38:21.121117 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-19 00:38:21.121129 | orchestrator | Friday 19 September 2025 00:38:18 +0000 (0:00:00.275) 0:00:38.997 ****** 2025-09-19 00:38:21.121142 | orchestrator | skipping: [testbed-node-5] 
2025-09-19 00:38:21.121154 | orchestrator | 2025-09-19 00:38:21.121166 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-19 00:38:21.121179 | orchestrator | Friday 19 September 2025 00:38:19 +0000 (0:00:00.166) 0:00:39.164 ****** 2025-09-19 00:38:21.121211 | orchestrator | ok: [testbed-node-5] => { 2025-09-19 00:38:21.121224 | orchestrator |  "ceph_osd_devices": { 2025-09-19 00:38:21.121236 | orchestrator |  "sdb": { 2025-09-19 00:38:21.121248 | orchestrator |  "osd_lvm_uuid": "9c5ae36c-b075-5e22-9b23-69e08de6e546" 2025-09-19 00:38:21.121275 | orchestrator |  }, 2025-09-19 00:38:21.121286 | orchestrator |  "sdc": { 2025-09-19 00:38:21.121297 | orchestrator |  "osd_lvm_uuid": "3271a5cd-b931-506b-9a72-a7bc6b6b65fd" 2025-09-19 00:38:21.121308 | orchestrator |  } 2025-09-19 00:38:21.121319 | orchestrator |  } 2025-09-19 00:38:21.121330 | orchestrator | } 2025-09-19 00:38:21.121341 | orchestrator | 2025-09-19 00:38:21.121352 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-19 00:38:21.121363 | orchestrator | Friday 19 September 2025 00:38:19 +0000 (0:00:00.160) 0:00:39.324 ****** 2025-09-19 00:38:21.121374 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:38:21.121384 | orchestrator | 2025-09-19 00:38:21.121395 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-19 00:38:21.121406 | orchestrator | Friday 19 September 2025 00:38:19 +0000 (0:00:00.136) 0:00:39.461 ****** 2025-09-19 00:38:21.121416 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:38:21.121427 | orchestrator | 2025-09-19 00:38:21.121438 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-19 00:38:21.121449 | orchestrator | Friday 19 September 2025 00:38:19 +0000 (0:00:00.245) 0:00:39.707 ****** 2025-09-19 00:38:21.121466 | orchestrator | skipping: [testbed-node-5] 
2025-09-19 00:38:21.121477 | orchestrator | 2025-09-19 00:38:21.121488 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-19 00:38:21.121499 | orchestrator | Friday 19 September 2025 00:38:19 +0000 (0:00:00.098) 0:00:39.805 ****** 2025-09-19 00:38:21.121509 | orchestrator | changed: [testbed-node-5] => { 2025-09-19 00:38:21.121520 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-19 00:38:21.121531 | orchestrator |  "ceph_osd_devices": { 2025-09-19 00:38:21.121541 | orchestrator |  "sdb": { 2025-09-19 00:38:21.121552 | orchestrator |  "osd_lvm_uuid": "9c5ae36c-b075-5e22-9b23-69e08de6e546" 2025-09-19 00:38:21.121562 | orchestrator |  }, 2025-09-19 00:38:21.121573 | orchestrator |  "sdc": { 2025-09-19 00:38:21.121584 | orchestrator |  "osd_lvm_uuid": "3271a5cd-b931-506b-9a72-a7bc6b6b65fd" 2025-09-19 00:38:21.121595 | orchestrator |  } 2025-09-19 00:38:21.121606 | orchestrator |  }, 2025-09-19 00:38:21.121616 | orchestrator |  "lvm_volumes": [ 2025-09-19 00:38:21.121627 | orchestrator |  { 2025-09-19 00:38:21.121638 | orchestrator |  "data": "osd-block-9c5ae36c-b075-5e22-9b23-69e08de6e546", 2025-09-19 00:38:21.121648 | orchestrator |  "data_vg": "ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546" 2025-09-19 00:38:21.121659 | orchestrator |  }, 2025-09-19 00:38:21.121669 | orchestrator |  { 2025-09-19 00:38:21.121680 | orchestrator |  "data": "osd-block-3271a5cd-b931-506b-9a72-a7bc6b6b65fd", 2025-09-19 00:38:21.121691 | orchestrator |  "data_vg": "ceph-3271a5cd-b931-506b-9a72-a7bc6b6b65fd" 2025-09-19 00:38:21.121702 | orchestrator |  } 2025-09-19 00:38:21.121713 | orchestrator |  ] 2025-09-19 00:38:21.121723 | orchestrator |  } 2025-09-19 00:38:21.121734 | orchestrator | } 2025-09-19 00:38:21.121748 | orchestrator | 2025-09-19 00:38:21.121759 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-19 00:38:21.121770 | orchestrator | Friday 19 September 2025 
00:38:19 +0000 (0:00:00.148) 0:00:39.954 ****** 2025-09-19 00:38:21.121781 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-19 00:38:21.121792 | orchestrator | 2025-09-19 00:38:21.121802 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:38:21.121813 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-19 00:38:21.121825 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-19 00:38:21.121836 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-19 00:38:21.121847 | orchestrator | 2025-09-19 00:38:21.121858 | orchestrator | 2025-09-19 00:38:21.121868 | orchestrator | 2025-09-19 00:38:21.121879 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 00:38:21.121890 | orchestrator | Friday 19 September 2025 00:38:21 +0000 (0:00:01.198) 0:00:41.152 ****** 2025-09-19 00:38:21.121900 | orchestrator | =============================================================================== 2025-09-19 00:38:21.121911 | orchestrator | Write configuration file ------------------------------------------------ 4.35s 2025-09-19 00:38:21.121922 | orchestrator | Add known links to the list of available block devices ------------------ 1.11s 2025-09-19 00:38:21.121932 | orchestrator | Add known partitions to the list of available block devices ------------- 1.08s 2025-09-19 00:38:21.121943 | orchestrator | Add known partitions to the list of available block devices ------------- 1.01s 2025-09-19 00:38:21.121953 | orchestrator | Get initial list of available block devices ----------------------------- 0.95s 2025-09-19 00:38:21.121964 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.88s 2025-09-19 00:38:21.121981 | 
orchestrator | Add known links to the list of available block devices ------------------ 0.79s 2025-09-19 00:38:21.121992 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s 2025-09-19 00:38:21.122002 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.72s 2025-09-19 00:38:21.122013 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s 2025-09-19 00:38:21.122083 | orchestrator | Set WAL devices config data --------------------------------------------- 0.68s 2025-09-19 00:38:21.122094 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s 2025-09-19 00:38:21.122104 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2025-09-19 00:38:21.122115 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.65s 2025-09-19 00:38:21.122133 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2025-09-19 00:38:21.350572 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2025-09-19 00:38:21.350649 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.64s 2025-09-19 00:38:21.350662 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s 2025-09-19 00:38:21.350672 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s 2025-09-19 00:38:21.350682 | orchestrator | Print configuration data ------------------------------------------------ 0.55s 2025-09-19 00:38:43.753254 | orchestrator | 2025-09-19 00:38:43 | INFO  | Task d13de9a0-3a7f-4c56-8af7-44f6836d8332 (sync inventory) is running in background. Output coming soon. 
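Editor's note: the play above derives the `lvm_volumes` list purely from the per-device `osd_lvm_uuid` values, as the "Print configuration data" output shows (`data: osd-block-<uuid>`, `data_vg: ceph-<uuid>`). A minimal sketch of that mapping, using a hypothetical helper name and the UUIDs taken from this log:

```python
def lvm_volumes_from_osd_devices(ceph_osd_devices):
    # Hypothetical helper (not part of OSISM): mirror the
    # "Generate lvm_volumes structure (block only)" task by naming
    # the LV "osd-block-<uuid>" and its VG "ceph-<uuid>".
    return [
        {
            "data": f"osd-block-{v['osd_lvm_uuid']}",
            "data_vg": f"ceph-{v['osd_lvm_uuid']}",
        }
        for v in ceph_osd_devices.values()
    ]

# Device dict as printed for testbed-node-5 above.
devices = {
    "sdb": {"osd_lvm_uuid": "9c5ae36c-b075-5e22-9b23-69e08de6e546"},
    "sdc": {"osd_lvm_uuid": "3271a5cd-b931-506b-9a72-a7bc6b6b65fd"},
}
volumes = lvm_volumes_from_osd_devices(devices)
print(volumes[0]["data_vg"])  # → ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546
```

This matches the `lvm_volumes` entries written to the configuration file by the handler at the end of the play.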
2025-09-19 00:39:02.582081 | orchestrator | 2025-09-19 00:38:45 | INFO  | Starting group_vars file reorganization 2025-09-19 00:39:02.582204 | orchestrator | 2025-09-19 00:38:45 | INFO  | Moved 0 file(s) to their respective directories 2025-09-19 00:39:02.582221 | orchestrator | 2025-09-19 00:38:45 | INFO  | Group_vars file reorganization completed 2025-09-19 00:39:02.582231 | orchestrator | 2025-09-19 00:38:47 | INFO  | Starting variable preparation from inventory 2025-09-19 00:39:02.582241 | orchestrator | 2025-09-19 00:38:48 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-09-19 00:39:02.582252 | orchestrator | 2025-09-19 00:38:48 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-09-19 00:39:02.582262 | orchestrator | 2025-09-19 00:38:48 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-09-19 00:39:02.582287 | orchestrator | 2025-09-19 00:38:48 | INFO  | 3 file(s) written, 6 host(s) processed 2025-09-19 00:39:02.582297 | orchestrator | 2025-09-19 00:38:48 | INFO  | Variable preparation completed 2025-09-19 00:39:02.582307 | orchestrator | 2025-09-19 00:38:49 | INFO  | Starting inventory overwrite handling 2025-09-19 00:39:02.582317 | orchestrator | 2025-09-19 00:38:49 | INFO  | Handling group overwrites in 99-overwrite 2025-09-19 00:39:02.582327 | orchestrator | 2025-09-19 00:38:49 | INFO  | Removing group frr:children from 60-generic 2025-09-19 00:39:02.582341 | orchestrator | 2025-09-19 00:38:49 | INFO  | Removing group storage:children from 50-kolla 2025-09-19 00:39:02.582351 | orchestrator | 2025-09-19 00:38:49 | INFO  | Removing group netbird:children from 50-infrastruture 2025-09-19 00:39:02.582361 | orchestrator | 2025-09-19 00:38:49 | INFO  | Removing group ceph-rgw from 50-ceph 2025-09-19 00:39:02.582371 | orchestrator | 2025-09-19 00:38:49 | INFO  | Removing group ceph-mds from 50-ceph 2025-09-19 00:39:02.582381 | orchestrator | 2025-09-19 00:38:49 | INFO  | Handling group 
overwrites in 20-roles 2025-09-19 00:39:02.582391 | orchestrator | 2025-09-19 00:38:49 | INFO  | Removing group k3s_node from 50-infrastruture 2025-09-19 00:39:02.582453 | orchestrator | 2025-09-19 00:38:49 | INFO  | Removed 6 group(s) in total 2025-09-19 00:39:02.582464 | orchestrator | 2025-09-19 00:38:49 | INFO  | Inventory overwrite handling completed 2025-09-19 00:39:02.582473 | orchestrator | 2025-09-19 00:38:50 | INFO  | Starting merge of inventory files 2025-09-19 00:39:02.582483 | orchestrator | 2025-09-19 00:38:50 | INFO  | Inventory files merged successfully 2025-09-19 00:39:02.582492 | orchestrator | 2025-09-19 00:38:54 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-09-19 00:39:02.582502 | orchestrator | 2025-09-19 00:39:01 | INFO  | Successfully wrote ClusterShell configuration 2025-09-19 00:39:02.582512 | orchestrator | [master ee67b11] 2025-09-19-00-39 2025-09-19 00:39:02.582522 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-09-19 00:39:04.608146 | orchestrator | 2025-09-19 00:39:04 | INFO  | Task cf27b636-994d-438c-8ac7-71b7074f3c51 (ceph-create-lvm-devices) was prepared for execution. 2025-09-19 00:39:04.608290 | orchestrator | 2025-09-19 00:39:04 | INFO  | It takes a moment until task cf27b636-994d-438c-8ac7-71b7074f3c51 (ceph-create-lvm-devices) has been started and output is visible here. 
2025-09-19 00:39:15.488940 | orchestrator | 2025-09-19 00:39:15.489010 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-19 00:39:15.489022 | orchestrator | 2025-09-19 00:39:15.489031 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-19 00:39:15.489040 | orchestrator | Friday 19 September 2025 00:39:08 +0000 (0:00:00.336) 0:00:00.336 ****** 2025-09-19 00:39:15.489049 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 00:39:15.489058 | orchestrator | 2025-09-19 00:39:15.489067 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-19 00:39:15.489075 | orchestrator | Friday 19 September 2025 00:39:08 +0000 (0:00:00.231) 0:00:00.568 ****** 2025-09-19 00:39:15.489084 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:39:15.489093 | orchestrator | 2025-09-19 00:39:15.489102 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:39:15.489110 | orchestrator | Friday 19 September 2025 00:39:08 +0000 (0:00:00.217) 0:00:00.785 ****** 2025-09-19 00:39:15.489119 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-19 00:39:15.489128 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-19 00:39:15.489137 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-19 00:39:15.489165 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-19 00:39:15.489174 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-19 00:39:15.489183 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-19 00:39:15.489191 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-19 00:39:15.489199 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-19 00:39:15.489208 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-19 00:39:15.489216 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-19 00:39:15.489225 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-19 00:39:15.489233 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-19 00:39:15.489241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-19 00:39:15.489250 | orchestrator | 2025-09-19 00:39:15.489258 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:39:15.489284 | orchestrator | Friday 19 September 2025 00:39:09 +0000 (0:00:00.351) 0:00:01.137 ****** 2025-09-19 00:39:15.489293 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:39:15.489302 | orchestrator | 2025-09-19 00:39:15.489311 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:39:15.489319 | orchestrator | Friday 19 September 2025 00:39:09 +0000 (0:00:00.368) 0:00:01.505 ****** 2025-09-19 00:39:15.489328 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:39:15.489336 | orchestrator | 2025-09-19 00:39:15.489345 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:39:15.489353 | orchestrator | Friday 19 September 2025 00:39:09 +0000 (0:00:00.228) 0:00:01.734 ****** 2025-09-19 00:39:15.489362 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:39:15.489370 | orchestrator | 2025-09-19 00:39:15.489379 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-09-19 00:39:15.489388 | orchestrator | Friday 19 September 2025 00:39:09 +0000 (0:00:00.175) 0:00:01.909 ****** 2025-09-19 00:39:15.489396 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:39:15.489405 | orchestrator | 2025-09-19 00:39:15.489413 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:39:15.489422 | orchestrator | Friday 19 September 2025 00:39:10 +0000 (0:00:00.181) 0:00:02.091 ****** 2025-09-19 00:39:15.489430 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:39:15.489439 | orchestrator | 2025-09-19 00:39:15.489447 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:39:15.489456 | orchestrator | Friday 19 September 2025 00:39:10 +0000 (0:00:00.195) 0:00:02.286 ****** 2025-09-19 00:39:15.489464 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:39:15.489473 | orchestrator | 2025-09-19 00:39:15.489481 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:39:15.489490 | orchestrator | Friday 19 September 2025 00:39:10 +0000 (0:00:00.187) 0:00:02.473 ****** 2025-09-19 00:39:15.489498 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:39:15.489507 | orchestrator | 2025-09-19 00:39:15.489517 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:39:15.489528 | orchestrator | Friday 19 September 2025 00:39:10 +0000 (0:00:00.208) 0:00:02.682 ****** 2025-09-19 00:39:15.489538 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:39:15.489547 | orchestrator | 2025-09-19 00:39:15.489557 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:39:15.489568 | orchestrator | Friday 19 September 2025 00:39:10 +0000 (0:00:00.187) 0:00:02.869 ****** 2025-09-19 00:39:15.489577 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_55973005-cab9-4651-a089-f76828fe5b13) 2025-09-19 00:39:15.489587 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_55973005-cab9-4651-a089-f76828fe5b13) 2025-09-19 00:39:15.489597 | orchestrator | 2025-09-19 00:39:15.489607 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:39:15.489618 | orchestrator | Friday 19 September 2025 00:39:11 +0000 (0:00:00.454) 0:00:03.324 ****** 2025-09-19 00:39:15.489640 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5095dff0-407e-4b8b-811f-a3c5cd55a16d) 2025-09-19 00:39:15.489652 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5095dff0-407e-4b8b-811f-a3c5cd55a16d) 2025-09-19 00:39:15.489662 | orchestrator | 2025-09-19 00:39:15.489672 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:39:15.489681 | orchestrator | Friday 19 September 2025 00:39:11 +0000 (0:00:00.428) 0:00:03.752 ****** 2025-09-19 00:39:15.489691 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7d2555f8-8f26-4f5e-8b79-cd121c4d405f) 2025-09-19 00:39:15.489701 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7d2555f8-8f26-4f5e-8b79-cd121c4d405f) 2025-09-19 00:39:15.489711 | orchestrator | 2025-09-19 00:39:15.489721 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:39:15.489737 | orchestrator | Friday 19 September 2025 00:39:12 +0000 (0:00:00.633) 0:00:04.386 ****** 2025-09-19 00:39:15.489747 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ace41295-549a-4643-92eb-07daa5f39402) 2025-09-19 00:39:15.489756 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ace41295-549a-4643-92eb-07daa5f39402) 2025-09-19 00:39:15.489766 | orchestrator | 2025-09-19 00:39:15.489776 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:39:15.489785 | orchestrator | Friday 19 September 2025 00:39:12 +0000 (0:00:00.548) 0:00:04.934 ****** 2025-09-19 00:39:15.489795 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-19 00:39:15.489805 | orchestrator | 2025-09-19 00:39:15.489814 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:39:15.489824 | orchestrator | Friday 19 September 2025 00:39:13 +0000 (0:00:00.597) 0:00:05.532 ****** 2025-09-19 00:39:15.489834 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-19 00:39:15.489844 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-19 00:39:15.489853 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-19 00:39:15.489861 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-09-19 00:39:15.489882 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-19 00:39:15.489891 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-19 00:39:15.489900 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-19 00:39:15.489908 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-19 00:39:15.489916 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-19 00:39:15.489925 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-19 00:39:15.489933 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-19 00:39:15.489941 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-19 00:39:15.489954 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-19 00:39:15.489962 | orchestrator | 2025-09-19 00:39:15.489971 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:39:15.489980 | orchestrator | Friday 19 September 2025 00:39:13 +0000 (0:00:00.401) 0:00:05.933 ****** 2025-09-19 00:39:15.489988 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:39:15.489997 | orchestrator | 2025-09-19 00:39:15.490005 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:39:15.490013 | orchestrator | Friday 19 September 2025 00:39:14 +0000 (0:00:00.181) 0:00:06.114 ****** 2025-09-19 00:39:15.490067 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:39:15.490076 | orchestrator | 2025-09-19 00:39:15.490085 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:39:15.490093 | orchestrator | Friday 19 September 2025 00:39:14 +0000 (0:00:00.207) 0:00:06.322 ****** 2025-09-19 00:39:15.490102 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:39:15.490110 | orchestrator | 2025-09-19 00:39:15.490119 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:39:15.490127 | orchestrator | Friday 19 September 2025 00:39:14 +0000 (0:00:00.199) 0:00:06.521 ****** 2025-09-19 00:39:15.490136 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:39:15.490164 | orchestrator | 2025-09-19 00:39:15.490174 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:39:15.490183 | orchestrator | Friday 19 September 2025 
00:39:14 +0000 (0:00:00.170) 0:00:06.692 ******
2025-09-19 00:39:15.490197 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:15.490206 | orchestrator |
2025-09-19 00:39:15.490214 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:39:15.490223 | orchestrator | Friday 19 September 2025 00:39:14 +0000 (0:00:00.182) 0:00:06.874 ******
2025-09-19 00:39:15.490231 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:15.490240 | orchestrator |
2025-09-19 00:39:15.490248 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:39:15.490257 | orchestrator | Friday 19 September 2025 00:39:15 +0000 (0:00:00.178) 0:00:07.053 ******
2025-09-19 00:39:15.490265 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:15.490273 | orchestrator |
2025-09-19 00:39:15.490282 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:39:15.490291 | orchestrator | Friday 19 September 2025 00:39:15 +0000 (0:00:00.194) 0:00:07.248 ******
2025-09-19 00:39:15.490306 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:23.430384 | orchestrator |
2025-09-19 00:39:23.430483 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:39:23.430498 | orchestrator | Friday 19 September 2025 00:39:15 +0000 (0:00:00.187) 0:00:07.436 ******
2025-09-19 00:39:23.430509 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-09-19 00:39:23.430520 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-09-19 00:39:23.430531 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-09-19 00:39:23.430541 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-09-19 00:39:23.430550 | orchestrator |
2025-09-19 00:39:23.430560 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:39:23.430570 | orchestrator | Friday 19 September 2025 00:39:16 +0000 (0:00:00.880) 0:00:08.316 ******
2025-09-19 00:39:23.430580 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:23.430589 | orchestrator |
2025-09-19 00:39:23.430599 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:39:23.430609 | orchestrator | Friday 19 September 2025 00:39:16 +0000 (0:00:00.191) 0:00:08.508 ******
2025-09-19 00:39:23.430618 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:23.430628 | orchestrator |
2025-09-19 00:39:23.430638 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:39:23.430648 | orchestrator | Friday 19 September 2025 00:39:16 +0000 (0:00:00.187) 0:00:08.695 ******
2025-09-19 00:39:23.430657 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:23.430667 | orchestrator |
2025-09-19 00:39:23.430676 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 00:39:23.430687 | orchestrator | Friday 19 September 2025 00:39:16 +0000 (0:00:00.169) 0:00:08.865 ******
2025-09-19 00:39:23.430697 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:23.430706 | orchestrator |
2025-09-19 00:39:23.430716 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-09-19 00:39:23.430726 | orchestrator | Friday 19 September 2025 00:39:17 +0000 (0:00:00.194) 0:00:09.059 ******
2025-09-19 00:39:23.430735 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:23.430745 | orchestrator |
2025-09-19 00:39:23.430754 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-09-19 00:39:23.430764 | orchestrator | Friday 19 September 2025 00:39:17 +0000 (0:00:00.140) 0:00:09.200 ******
2025-09-19 00:39:23.430774 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bc7aa585-dea2-57c4-a9fa-18818632dc3c'}})
2025-09-19 00:39:23.430785 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ba978b90-a663-5d0c-8f05-4b4e8986f79e'}})
2025-09-19 00:39:23.430794 | orchestrator |
2025-09-19 00:39:23.430804 | orchestrator | TASK [Create block VGs] ********************************************************
2025-09-19 00:39:23.430814 | orchestrator | Friday 19 September 2025 00:39:17 +0000 (0:00:00.192) 0:00:09.392 ******
2025-09-19 00:39:23.430825 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-bc7aa585-dea2-57c4-a9fa-18818632dc3c', 'data_vg': 'ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c'})
2025-09-19 00:39:23.430856 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ba978b90-a663-5d0c-8f05-4b4e8986f79e', 'data_vg': 'ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e'})
2025-09-19 00:39:23.430866 | orchestrator |
2025-09-19 00:39:23.430875 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-09-19 00:39:23.430885 | orchestrator | Friday 19 September 2025 00:39:19 +0000 (0:00:02.061) 0:00:11.454 ******
2025-09-19 00:39:23.430895 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bc7aa585-dea2-57c4-a9fa-18818632dc3c', 'data_vg': 'ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c'})
2025-09-19 00:39:23.430906 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba978b90-a663-5d0c-8f05-4b4e8986f79e', 'data_vg': 'ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e'})
2025-09-19 00:39:23.430916 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:23.430925 | orchestrator |
2025-09-19 00:39:23.430938 | orchestrator | TASK [Create block LVs] ********************************************************
2025-09-19 00:39:23.430949 | orchestrator | Friday 19 September 2025 00:39:19 +0000 (0:00:00.157) 0:00:11.612 ******
2025-09-19 00:39:23.430960 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-bc7aa585-dea2-57c4-a9fa-18818632dc3c', 'data_vg': 'ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c'})
2025-09-19 00:39:23.430971 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ba978b90-a663-5d0c-8f05-4b4e8986f79e', 'data_vg': 'ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e'})
2025-09-19 00:39:23.430982 | orchestrator |
2025-09-19 00:39:23.430993 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-09-19 00:39:23.431004 | orchestrator | Friday 19 September 2025 00:39:21 +0000 (0:00:01.493) 0:00:13.106 ******
2025-09-19 00:39:23.431015 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bc7aa585-dea2-57c4-a9fa-18818632dc3c', 'data_vg': 'ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c'})
2025-09-19 00:39:23.431026 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba978b90-a663-5d0c-8f05-4b4e8986f79e', 'data_vg': 'ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e'})
2025-09-19 00:39:23.431038 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:23.431049 | orchestrator |
2025-09-19 00:39:23.431060 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-09-19 00:39:23.431071 | orchestrator | Friday 19 September 2025 00:39:21 +0000 (0:00:00.139) 0:00:13.246 ******
2025-09-19 00:39:23.431083 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:23.431092 | orchestrator |
2025-09-19 00:39:23.431102 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-09-19 00:39:23.431127 | orchestrator | Friday 19 September 2025 00:39:21 +0000 (0:00:00.129) 0:00:13.375 ******
2025-09-19 00:39:23.431138 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bc7aa585-dea2-57c4-a9fa-18818632dc3c', 'data_vg': 'ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c'})
2025-09-19 00:39:23.431176 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba978b90-a663-5d0c-8f05-4b4e8986f79e', 'data_vg': 'ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e'})
2025-09-19 00:39:23.431186 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:23.431195 | orchestrator |
2025-09-19 00:39:23.431205 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-09-19 00:39:23.431215 | orchestrator | Friday 19 September 2025 00:39:21 +0000 (0:00:00.338) 0:00:13.714 ******
2025-09-19 00:39:23.431224 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:23.431234 | orchestrator |
2025-09-19 00:39:23.431244 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-09-19 00:39:23.431254 | orchestrator | Friday 19 September 2025 00:39:21 +0000 (0:00:00.136) 0:00:13.850 ******
2025-09-19 00:39:23.431263 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bc7aa585-dea2-57c4-a9fa-18818632dc3c', 'data_vg': 'ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c'})
2025-09-19 00:39:23.431280 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba978b90-a663-5d0c-8f05-4b4e8986f79e', 'data_vg': 'ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e'})
2025-09-19 00:39:23.431290 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:23.431300 | orchestrator |
2025-09-19 00:39:23.431310 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-09-19 00:39:23.431319 | orchestrator | Friday 19 September 2025 00:39:22 +0000 (0:00:00.165) 0:00:14.016 ******
2025-09-19 00:39:23.431329 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:23.431339 | orchestrator |
2025-09-19 00:39:23.431348 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-09-19 00:39:23.431358 | orchestrator | Friday 19 September 2025 00:39:22 +0000 (0:00:00.144) 0:00:14.160 ******
2025-09-19 00:39:23.431368 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bc7aa585-dea2-57c4-a9fa-18818632dc3c', 'data_vg': 'ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c'})
2025-09-19 00:39:23.431378 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba978b90-a663-5d0c-8f05-4b4e8986f79e', 'data_vg': 'ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e'})
2025-09-19 00:39:23.431387 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:23.431397 | orchestrator |
2025-09-19 00:39:23.431407 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-09-19 00:39:23.431416 | orchestrator | Friday 19 September 2025 00:39:22 +0000 (0:00:00.152) 0:00:14.313 ******
2025-09-19 00:39:23.431426 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:39:23.431436 | orchestrator |
2025-09-19 00:39:23.431446 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-09-19 00:39:23.431455 | orchestrator | Friday 19 September 2025 00:39:22 +0000 (0:00:00.147) 0:00:14.460 ******
2025-09-19 00:39:23.431482 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bc7aa585-dea2-57c4-a9fa-18818632dc3c', 'data_vg': 'ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c'})
2025-09-19 00:39:23.431497 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba978b90-a663-5d0c-8f05-4b4e8986f79e', 'data_vg': 'ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e'})
2025-09-19 00:39:23.431507 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:23.431516 | orchestrator |
2025-09-19 00:39:23.431526 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-09-19 00:39:23.431536 | orchestrator | Friday 19 September 2025 00:39:22 +0000 (0:00:00.191) 0:00:14.652 ******
2025-09-19 00:39:23.431545 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bc7aa585-dea2-57c4-a9fa-18818632dc3c', 'data_vg': 'ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c'})
2025-09-19 00:39:23.431555 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba978b90-a663-5d0c-8f05-4b4e8986f79e', 'data_vg': 'ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e'})
2025-09-19 00:39:23.431565 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:23.431574 | orchestrator |
2025-09-19 00:39:23.431584 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-09-19 00:39:23.431594 | orchestrator | Friday 19 September 2025 00:39:22 +0000 (0:00:00.186) 0:00:14.838 ******
2025-09-19 00:39:23.431603 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bc7aa585-dea2-57c4-a9fa-18818632dc3c', 'data_vg': 'ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c'})
2025-09-19 00:39:23.431613 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba978b90-a663-5d0c-8f05-4b4e8986f79e', 'data_vg': 'ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e'})
2025-09-19 00:39:23.431623 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:23.431632 | orchestrator |
2025-09-19 00:39:23.431642 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-09-19 00:39:23.431651 | orchestrator | Friday 19 September 2025 00:39:23 +0000 (0:00:00.198) 0:00:15.037 ******
2025-09-19 00:39:23.431661 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:23.431671 | orchestrator |
2025-09-19 00:39:23.431680 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-09-19 00:39:23.431696 | orchestrator | Friday 19 September 2025 00:39:23 +0000 (0:00:00.177) 0:00:15.214 ******
2025-09-19 00:39:23.431706 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:23.431716 | orchestrator |
2025-09-19 00:39:23.431731 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-19 00:39:31.241570 | orchestrator | Friday 19 September 2025 00:39:23 +0000 (0:00:00.161) 0:00:15.376 ******
2025-09-19 00:39:31.241697 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:31.241724 | orchestrator |
2025-09-19 00:39:31.241745 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-09-19 00:39:31.241765 | orchestrator | Friday 19 September 2025 00:39:23 +0000 (0:00:00.164) 0:00:15.540 ******
2025-09-19 00:39:31.241785 | orchestrator | ok: [testbed-node-3] => {
2025-09-19 00:39:31.241805 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-09-19 00:39:31.241826 | orchestrator | }
2025-09-19 00:39:31.241845 | orchestrator |
2025-09-19 00:39:31.241865 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-09-19 00:39:31.241885 | orchestrator | Friday 19 September 2025 00:39:23 +0000 (0:00:00.363) 0:00:15.904 ******
2025-09-19 00:39:31.241905 | orchestrator | ok: [testbed-node-3] => {
2025-09-19 00:39:31.241924 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-09-19 00:39:31.241942 | orchestrator | }
2025-09-19 00:39:31.241962 | orchestrator |
2025-09-19 00:39:31.241982 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-19 00:39:31.242001 | orchestrator | Friday 19 September 2025 00:39:24 +0000 (0:00:00.194) 0:00:16.098 ******
2025-09-19 00:39:31.242091 | orchestrator | ok: [testbed-node-3] => {
2025-09-19 00:39:31.242117 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-09-19 00:39:31.242167 | orchestrator | }
2025-09-19 00:39:31.242187 | orchestrator |
2025-09-19 00:39:31.242208 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-19 00:39:31.242228 | orchestrator | Friday 19 September 2025 00:39:24 +0000 (0:00:00.185) 0:00:16.284 ******
2025-09-19 00:39:31.242247 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:39:31.242267 | orchestrator |
2025-09-19 00:39:31.242287 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-09-19 00:39:31.242308 | orchestrator | Friday 19 September 2025 00:39:25 +0000 (0:00:00.874) 0:00:17.158 ******
2025-09-19 00:39:31.242327 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:39:31.242346 | orchestrator |
2025-09-19 00:39:31.242366 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-09-19 00:39:31.242385 | orchestrator | Friday 19 September 2025 00:39:25 +0000 (0:00:00.538) 0:00:17.696 ******
2025-09-19 00:39:31.242405 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:39:31.242423 | orchestrator |
2025-09-19 00:39:31.242443 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-09-19 00:39:31.242463 | orchestrator | Friday 19 September 2025 00:39:26 +0000 (0:00:00.521) 0:00:18.218 ******
2025-09-19 00:39:31.242481 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:39:31.242500 | orchestrator |
2025-09-19 00:39:31.242518 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-09-19 00:39:31.242536 | orchestrator | Friday 19 September 2025 00:39:26 +0000 (0:00:00.136) 0:00:18.354 ******
2025-09-19 00:39:31.242554 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:31.242572 | orchestrator |
2025-09-19 00:39:31.242590 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-09-19 00:39:31.242608 | orchestrator | Friday 19 September 2025 00:39:26 +0000 (0:00:00.147) 0:00:18.501 ******
2025-09-19 00:39:31.242627 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:31.242646 | orchestrator |
2025-09-19 00:39:31.242665 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-19 00:39:31.242684 | orchestrator | Friday 19 September 2025 00:39:26 +0000 (0:00:00.111) 0:00:18.613 ******
2025-09-19 00:39:31.242703 | orchestrator | ok: [testbed-node-3] => {
2025-09-19 00:39:31.242722 | orchestrator |  "vgs_report": {
2025-09-19 00:39:31.242777 | orchestrator |  "vg": []
2025-09-19 00:39:31.242797 | orchestrator |  }
2025-09-19 00:39:31.242816 | orchestrator | }
2025-09-19 00:39:31.242835 | orchestrator |
2025-09-19 00:39:31.242873 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-19 00:39:31.242892 | orchestrator | Friday 19 September 2025 00:39:26 +0000 (0:00:00.153) 0:00:18.766 ******
2025-09-19 00:39:31.242911 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:31.242929 | orchestrator |
2025-09-19 00:39:31.242948 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-19 00:39:31.242966 | orchestrator | Friday 19 September 2025 00:39:26 +0000 (0:00:00.155) 0:00:18.903 ******
2025-09-19 00:39:31.242985 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:31.243004 | orchestrator |
2025-09-19 00:39:31.243023 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-19 00:39:31.243041 | orchestrator | Friday 19 September 2025 00:39:27 +0000 (0:00:00.363) 0:00:19.059 ******
2025-09-19 00:39:31.243060 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:31.243078 | orchestrator |
2025-09-19 00:39:31.243097 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-19 00:39:31.243116 | orchestrator | Friday 19 September 2025 00:39:27 +0000 (0:00:00.133) 0:00:19.422 ******
2025-09-19 00:39:31.243177 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:31.243198 | orchestrator |
2025-09-19 00:39:31.243216 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-19 00:39:31.243235 | orchestrator | Friday 19 September 2025 00:39:27 +0000 (0:00:00.135) 0:00:19.555 ******
2025-09-19 00:39:31.243253 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:31.243272 | orchestrator |
2025-09-19 00:39:31.243291 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-19 00:39:31.243309 | orchestrator | Friday 19 September 2025 00:39:27 +0000 (0:00:00.135) 0:00:19.691 ******
2025-09-19 00:39:31.243328 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:31.243345 | orchestrator |
2025-09-19 00:39:31.243364 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-19 00:39:31.243382 | orchestrator | Friday 19 September 2025 00:39:27 +0000 (0:00:00.128) 0:00:19.819 ******
2025-09-19 00:39:31.243400 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:31.243417 | orchestrator |
2025-09-19 00:39:31.243435 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-19 00:39:31.243453 | orchestrator | Friday 19 September 2025 00:39:27 +0000 (0:00:00.114) 0:00:19.934 ******
2025-09-19 00:39:31.243471 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:31.243489 | orchestrator |
2025-09-19 00:39:31.243506 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-19 00:39:31.243556 | orchestrator | Friday 19 September 2025 00:39:28 +0000 (0:00:00.123) 0:00:20.057 ******
2025-09-19 00:39:31.243575 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:31.243592 | orchestrator |
2025-09-19 00:39:31.243610 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-19 00:39:31.243627 | orchestrator | Friday 19 September 2025 00:39:28 +0000 (0:00:00.195) 0:00:20.252 ******
2025-09-19 00:39:31.243644 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:31.243663 | orchestrator |
2025-09-19 00:39:31.243680 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-19 00:39:31.243698 | orchestrator | Friday 19 September 2025 00:39:28 +0000 (0:00:00.215) 0:00:20.468 ******
2025-09-19 00:39:31.243716 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:31.243734 | orchestrator |
2025-09-19 00:39:31.243753 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-19 00:39:31.243772 | orchestrator | Friday 19 September 2025 00:39:28 +0000 (0:00:00.217) 0:00:20.685 ******
2025-09-19 00:39:31.243791 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:31.243809 | orchestrator |
2025-09-19 00:39:31.243828 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-19 00:39:31.243867 | orchestrator | Friday 19 September 2025 00:39:28 +0000 (0:00:00.189) 0:00:20.875 ******
2025-09-19 00:39:31.243886 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:31.243904 | orchestrator |
2025-09-19 00:39:31.243923 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-19 00:39:31.243942 | orchestrator | Friday 19 September 2025 00:39:29 +0000 (0:00:00.202) 0:00:21.078 ******
2025-09-19 00:39:31.243960 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:31.243979 | orchestrator |
2025-09-19 00:39:31.243998 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-19 00:39:31.244017 | orchestrator | Friday 19 September 2025 00:39:29 +0000 (0:00:00.169) 0:00:21.247 ******
2025-09-19 00:39:31.244037 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bc7aa585-dea2-57c4-a9fa-18818632dc3c', 'data_vg': 'ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c'})
2025-09-19 00:39:31.244057 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba978b90-a663-5d0c-8f05-4b4e8986f79e', 'data_vg': 'ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e'})
2025-09-19 00:39:31.244076 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:31.244093 | orchestrator |
2025-09-19 00:39:31.244111 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-19 00:39:31.244130 | orchestrator | Friday 19 September 2025 00:39:29 +0000 (0:00:00.211) 0:00:21.459 ******
2025-09-19 00:39:31.244226 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bc7aa585-dea2-57c4-a9fa-18818632dc3c', 'data_vg': 'ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c'})
2025-09-19 00:39:31.244245 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba978b90-a663-5d0c-8f05-4b4e8986f79e', 'data_vg': 'ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e'})
2025-09-19 00:39:31.244263 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:31.244281 | orchestrator |
2025-09-19 00:39:31.244299 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-19 00:39:31.244318 | orchestrator | Friday 19 September 2025 00:39:30 +0000 (0:00:00.751) 0:00:22.211 ******
2025-09-19 00:39:31.244337 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bc7aa585-dea2-57c4-a9fa-18818632dc3c', 'data_vg': 'ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c'})
2025-09-19 00:39:31.244356 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba978b90-a663-5d0c-8f05-4b4e8986f79e', 'data_vg': 'ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e'})
2025-09-19 00:39:31.244373 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:31.244391 | orchestrator |
2025-09-19 00:39:31.244411 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-19 00:39:31.244429 | orchestrator | Friday 19 September 2025 00:39:30 +0000 (0:00:00.334) 0:00:22.545 ******
2025-09-19 00:39:31.244449 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bc7aa585-dea2-57c4-a9fa-18818632dc3c', 'data_vg': 'ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c'})
2025-09-19 00:39:31.244468 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba978b90-a663-5d0c-8f05-4b4e8986f79e', 'data_vg': 'ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e'})
2025-09-19 00:39:31.244487 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:31.244505 | orchestrator |
2025-09-19 00:39:31.244523 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-19 00:39:31.244541 | orchestrator | Friday 19 September 2025 00:39:30 +0000 (0:00:00.205) 0:00:22.751 ******
2025-09-19 00:39:31.244558 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bc7aa585-dea2-57c4-a9fa-18818632dc3c', 'data_vg': 'ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c'})
2025-09-19 00:39:31.244576 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba978b90-a663-5d0c-8f05-4b4e8986f79e', 'data_vg': 'ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e'})
2025-09-19 00:39:31.244594 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:31.244611 | orchestrator |
2025-09-19 00:39:31.244629 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-19 00:39:31.244666 | orchestrator | Friday 19 September 2025 00:39:31 +0000 (0:00:00.237) 0:00:22.989 ******
2025-09-19 00:39:31.244705 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bc7aa585-dea2-57c4-a9fa-18818632dc3c', 'data_vg': 'ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c'})
2025-09-19 00:39:31.244746 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba978b90-a663-5d0c-8f05-4b4e8986f79e', 'data_vg': 'ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e'})
2025-09-19 00:39:36.830963 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:36.831073 | orchestrator |
2025-09-19 00:39:36.831090 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-19 00:39:36.831103 | orchestrator | Friday 19 September 2025 00:39:31 +0000 (0:00:00.200) 0:00:23.189 ******
2025-09-19 00:39:36.831115 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bc7aa585-dea2-57c4-a9fa-18818632dc3c', 'data_vg': 'ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c'})
2025-09-19 00:39:36.831127 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba978b90-a663-5d0c-8f05-4b4e8986f79e', 'data_vg': 'ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e'})
2025-09-19 00:39:36.831166 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:36.831177 | orchestrator |
2025-09-19 00:39:36.831188 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-19 00:39:36.831199 | orchestrator | Friday 19 September 2025 00:39:31 +0000 (0:00:00.166) 0:00:23.356 ******
2025-09-19 00:39:36.831210 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bc7aa585-dea2-57c4-a9fa-18818632dc3c', 'data_vg': 'ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c'})
2025-09-19 00:39:36.831221 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba978b90-a663-5d0c-8f05-4b4e8986f79e', 'data_vg': 'ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e'})
2025-09-19 00:39:36.831232 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:36.831243 | orchestrator |
2025-09-19 00:39:36.831255 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-09-19 00:39:36.831266 | orchestrator | Friday 19 September 2025 00:39:31 +0000 (0:00:00.263) 0:00:23.620 ******
2025-09-19 00:39:36.831277 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:39:36.831289 | orchestrator |
2025-09-19 00:39:36.831299 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-09-19 00:39:36.831311 | orchestrator | Friday 19 September 2025 00:39:32 +0000 (0:00:00.574) 0:00:24.194 ******
2025-09-19 00:39:36.831321 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:39:36.831332 | orchestrator |
2025-09-19 00:39:36.831343 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-09-19 00:39:36.831354 | orchestrator | Friday 19 September 2025 00:39:32 +0000 (0:00:00.562) 0:00:24.757 ******
2025-09-19 00:39:36.831365 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:39:36.831375 | orchestrator |
2025-09-19 00:39:36.831386 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-09-19 00:39:36.831397 | orchestrator | Friday 19 September 2025 00:39:32 +0000 (0:00:00.190) 0:00:24.948 ******
2025-09-19 00:39:36.831408 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-ba978b90-a663-5d0c-8f05-4b4e8986f79e', 'vg_name': 'ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e'})
2025-09-19 00:39:36.831420 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-bc7aa585-dea2-57c4-a9fa-18818632dc3c', 'vg_name': 'ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c'})
2025-09-19 00:39:36.831431 | orchestrator |
2025-09-19 00:39:36.831459 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-09-19 00:39:36.831471 | orchestrator | Friday 19 September 2025 00:39:33 +0000 (0:00:00.159) 0:00:25.107 ******
2025-09-19 00:39:36.831482 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bc7aa585-dea2-57c4-a9fa-18818632dc3c', 'data_vg': 'ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c'})
2025-09-19 00:39:36.831493 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba978b90-a663-5d0c-8f05-4b4e8986f79e', 'data_vg': 'ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e'})
2025-09-19 00:39:36.831527 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:36.831540 | orchestrator |
2025-09-19 00:39:36.831552 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-09-19 00:39:36.831565 | orchestrator | Friday 19 September 2025 00:39:33 +0000 (0:00:00.158) 0:00:25.266 ******
2025-09-19 00:39:36.831578 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bc7aa585-dea2-57c4-a9fa-18818632dc3c', 'data_vg': 'ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c'})
2025-09-19 00:39:36.831590 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba978b90-a663-5d0c-8f05-4b4e8986f79e', 'data_vg': 'ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e'})
2025-09-19 00:39:36.831603 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:36.831615 | orchestrator |
2025-09-19 00:39:36.831627 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-09-19 00:39:36.831640 | orchestrator | Friday 19 September 2025 00:39:33 +0000 (0:00:00.354) 0:00:25.620 ******
2025-09-19 00:39:36.831652 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bc7aa585-dea2-57c4-a9fa-18818632dc3c', 'data_vg': 'ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c'})
2025-09-19 00:39:36.831665 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba978b90-a663-5d0c-8f05-4b4e8986f79e', 'data_vg': 'ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e'})
2025-09-19 00:39:36.831677 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:39:36.831690 | orchestrator |
2025-09-19 00:39:36.831703 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-19 00:39:36.831716 | orchestrator | Friday 19 September 2025 00:39:33 +0000 (0:00:00.160) 0:00:25.780 ******
2025-09-19 00:39:36.831729 | orchestrator | ok: [testbed-node-3] => {
2025-09-19 00:39:36.831741 | orchestrator |  "lvm_report": {
2025-09-19 00:39:36.831755 | orchestrator |  "lv": [
2025-09-19 00:39:36.831767 | orchestrator |  {
2025-09-19 00:39:36.831797 | orchestrator |  "lv_name": "osd-block-ba978b90-a663-5d0c-8f05-4b4e8986f79e",
2025-09-19 00:39:36.831811 | orchestrator |  "vg_name": "ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e"
2025-09-19 00:39:36.831824 | orchestrator |  },
2025-09-19 00:39:36.831836 | orchestrator |  {
2025-09-19 00:39:36.831849 | orchestrator |  "lv_name": "osd-block-bc7aa585-dea2-57c4-a9fa-18818632dc3c",
2025-09-19 00:39:36.831861 | orchestrator |  "vg_name": "ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c"
2025-09-19 00:39:36.831875 | orchestrator |  }
2025-09-19 00:39:36.831886 | orchestrator |  ],
2025-09-19 00:39:36.831897 | orchestrator |  "pv": [
2025-09-19 00:39:36.831907 | orchestrator |  {
2025-09-19 00:39:36.831918 | orchestrator |  "pv_name": "/dev/sdb",
2025-09-19 00:39:36.831929 | orchestrator |  "vg_name": "ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c"
2025-09-19 00:39:36.831939 | orchestrator |  },
2025-09-19 00:39:36.831950 | orchestrator |  {
2025-09-19 00:39:36.831961 | orchestrator |  "pv_name": "/dev/sdc",
2025-09-19 00:39:36.831971 | orchestrator |  "vg_name": "ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e"
2025-09-19 00:39:36.831982 | orchestrator |  }
2025-09-19 00:39:36.831993 | orchestrator |  ]
2025-09-19 00:39:36.832004 | orchestrator |  }
2025-09-19 00:39:36.832014 | orchestrator | }
2025-09-19 00:39:36.832026 | orchestrator |
2025-09-19 00:39:36.832036 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-09-19 00:39:36.832047 | orchestrator |
2025-09-19 00:39:36.832058 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-19 00:39:36.832069 | orchestrator | Friday 19 September 2025 00:39:34 +0000 (0:00:00.296) 0:00:26.077 ******
2025-09-19 00:39:36.832080 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-09-19 00:39:36.832091 | orchestrator |
2025-09-19 00:39:36.832110 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-19 00:39:36.832121 | orchestrator | Friday 19 September 2025 00:39:34 +0000 (0:00:00.240) 0:00:26.317 ******
2025-09-19 00:39:36.832155 | orchestrator | ok: [testbed-node-4]
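The "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" task above merges the JSON reports of `lvs` and `pvs` into the single `lvm_report` structure printed at the end of the node-3 play. The following is a minimal Python sketch of that merge, using sample data copied from the report above; the variable names mirror the registered Ansible facts but are otherwise hypothetical, and real output would come from `lvs --reportformat json` / `pvs --reportformat json`:

```python
import json

# Hypothetical stand-ins for the registered _lvs_cmd_output / _pvs_cmd_output
# values; the shape matches `lvs`/`pvs` run with `--reportformat json`.
lvs_json = '{"report": [{"lv": [{"lv_name": "osd-block-ba978b90-a663-5d0c-8f05-4b4e8986f79e", "vg_name": "ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e"}, {"lv_name": "osd-block-bc7aa585-dea2-57c4-a9fa-18818632dc3c", "vg_name": "ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c"}]}]}'
pvs_json = '{"report": [{"pv": [{"pv_name": "/dev/sdb", "vg_name": "ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c"}, {"pv_name": "/dev/sdc", "vg_name": "ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e"}]}]}'

# Merge the two reports into one dict, like the lvm_report printed above.
lvm_report = {
    "lv": json.loads(lvs_json)["report"][0]["lv"],
    "pv": json.loads(pvs_json)["report"][0]["pv"],
}

# Resolve each block LV back to its backing PV via the shared vg_name,
# analogous to the "Create list of VG/LV names" task.
vg_to_pv = {pv["vg_name"]: pv["pv_name"] for pv in lvm_report["pv"]}
for lv in lvm_report["lv"]:
    print(f"{lv['lv_name']} -> {vg_to_pv[lv['vg_name']]}")
```

With the sample data this maps each `osd-block-*` LV to `/dev/sdb` or `/dev/sdc`, matching the VG-to-PV pairing shown in the `lvm_report` output.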
2025-09-19 00:39:36.832166 | orchestrator |
2025-09-19 00:39:36.832177 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:39:36.832188 | orchestrator | Friday 19 September 2025 00:39:34 +0000 (0:00:00.231) 0:00:26.548 ******
2025-09-19 00:39:36.832198 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-09-19 00:39:36.832209 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-09-19 00:39:36.832220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-09-19 00:39:36.832230 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-09-19 00:39:36.832241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-09-19 00:39:36.832251 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-09-19 00:39:36.832262 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-09-19 00:39:36.832273 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-09-19 00:39:36.832289 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-09-19 00:39:36.832299 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-09-19 00:39:36.832310 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-09-19 00:39:36.832321 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-09-19 00:39:36.832332 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-09-19 00:39:36.832342 | orchestrator |
2025-09-19 00:39:36.832353 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:39:36.832363 | orchestrator | Friday 19 September 2025 00:39:34 +0000 (0:00:00.401) 0:00:26.949 ******
2025-09-19 00:39:36.832374 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:39:36.832385 | orchestrator |
2025-09-19 00:39:36.832395 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:39:36.832406 | orchestrator | Friday 19 September 2025 00:39:35 +0000 (0:00:00.217) 0:00:27.167 ******
2025-09-19 00:39:36.832417 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:39:36.832427 | orchestrator |
2025-09-19 00:39:36.832438 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:39:36.832449 | orchestrator | Friday 19 September 2025 00:39:35 +0000 (0:00:00.194) 0:00:27.361 ******
2025-09-19 00:39:36.832459 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:39:36.832470 | orchestrator |
2025-09-19 00:39:36.832480 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:39:36.832491 | orchestrator | Friday 19 September 2025 00:39:35 +0000 (0:00:00.194) 0:00:27.556 ******
2025-09-19 00:39:36.832502 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:39:36.832512 | orchestrator |
2025-09-19 00:39:36.832523 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:39:36.832534 | orchestrator | Friday 19 September 2025 00:39:36 +0000 (0:00:00.592) 0:00:28.148 ******
2025-09-19 00:39:36.832544 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:39:36.832555 | orchestrator |
2025-09-19 00:39:36.832565 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:39:36.832576 | orchestrator | Friday 19 September 2025 00:39:36 +0000 (0:00:00.191) 0:00:28.340 ******
2025-09-19 00:39:36.832587 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:39:36.832597 | orchestrator |
2025-09-19 00:39:36.832608 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:39:36.832627 | orchestrator | Friday 19 September 2025 00:39:36 +0000 (0:00:00.239) 0:00:28.579 ******
2025-09-19 00:39:36.832638 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:39:36.832649 | orchestrator |
2025-09-19 00:39:36.832667 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:39:47.830491 | orchestrator | Friday 19 September 2025 00:39:36 +0000 (0:00:00.199) 0:00:28.779 ******
2025-09-19 00:39:47.830610 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:39:47.830624 | orchestrator |
2025-09-19 00:39:47.830635 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:39:47.830646 | orchestrator | Friday 19 September 2025 00:39:37 +0000 (0:00:00.205) 0:00:28.984 ******
2025-09-19 00:39:47.830656 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3adbf97e-ee72-4483-9697-646cf4299ea9)
2025-09-19 00:39:47.830668 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3adbf97e-ee72-4483-9697-646cf4299ea9)
2025-09-19 00:39:47.830677 | orchestrator |
2025-09-19 00:39:47.830687 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:39:47.830696 | orchestrator | Friday 19 September 2025 00:39:37 +0000 (0:00:00.422) 0:00:29.407 ******
2025-09-19 00:39:47.830706 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_94fdce60-5769-46af-b883-c01ec9bbc4f3)
2025-09-19 00:39:47.830715 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_94fdce60-5769-46af-b883-c01ec9bbc4f3)
2025-09-19 00:39:47.830725 | orchestrator |
2025-09-19 00:39:47.830734 | orchestrator | TASK [Add known
links to the list of available block devices] ****************** 2025-09-19 00:39:47.830744 | orchestrator | Friday 19 September 2025 00:39:37 +0000 (0:00:00.455) 0:00:29.863 ****** 2025-09-19 00:39:47.830753 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7d861b66-423b-4a73-89d0-4a2393a19521) 2025-09-19 00:39:47.830763 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7d861b66-423b-4a73-89d0-4a2393a19521) 2025-09-19 00:39:47.830772 | orchestrator | 2025-09-19 00:39:47.830782 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:39:47.830791 | orchestrator | Friday 19 September 2025 00:39:38 +0000 (0:00:00.472) 0:00:30.335 ****** 2025-09-19 00:39:47.830801 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b274d452-dc05-477a-a838-600cb81e7cbe) 2025-09-19 00:39:47.830810 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b274d452-dc05-477a-a838-600cb81e7cbe) 2025-09-19 00:39:47.830820 | orchestrator | 2025-09-19 00:39:47.830829 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:39:47.830839 | orchestrator | Friday 19 September 2025 00:39:38 +0000 (0:00:00.485) 0:00:30.821 ****** 2025-09-19 00:39:47.830848 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-19 00:39:47.830858 | orchestrator | 2025-09-19 00:39:47.830867 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:39:47.830877 | orchestrator | Friday 19 September 2025 00:39:39 +0000 (0:00:00.405) 0:00:31.227 ****** 2025-09-19 00:39:47.830886 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-19 00:39:47.830898 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-19 00:39:47.830907 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-19 00:39:47.830917 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-19 00:39:47.830927 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-19 00:39:47.830936 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-19 00:39:47.830968 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-19 00:39:47.831010 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-19 00:39:47.831022 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-19 00:39:47.831033 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-19 00:39:47.831043 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-19 00:39:47.831053 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-19 00:39:47.831065 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-19 00:39:47.831076 | orchestrator | 2025-09-19 00:39:47.831087 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:39:47.831098 | orchestrator | Friday 19 September 2025 00:39:39 +0000 (0:00:00.622) 0:00:31.849 ****** 2025-09-19 00:39:47.831110 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:47.831121 | orchestrator | 2025-09-19 00:39:47.831160 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:39:47.831171 | orchestrator | Friday 19 September 2025 00:39:40 +0000 
(0:00:00.255) 0:00:32.105 ****** 2025-09-19 00:39:47.831183 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:47.831193 | orchestrator | 2025-09-19 00:39:47.831206 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:39:47.831217 | orchestrator | Friday 19 September 2025 00:39:40 +0000 (0:00:00.236) 0:00:32.342 ****** 2025-09-19 00:39:47.831228 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:47.831239 | orchestrator | 2025-09-19 00:39:47.831250 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:39:47.831261 | orchestrator | Friday 19 September 2025 00:39:40 +0000 (0:00:00.278) 0:00:32.620 ****** 2025-09-19 00:39:47.831272 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:47.831283 | orchestrator | 2025-09-19 00:39:47.831313 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:39:47.831325 | orchestrator | Friday 19 September 2025 00:39:40 +0000 (0:00:00.307) 0:00:32.927 ****** 2025-09-19 00:39:47.831337 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:47.831348 | orchestrator | 2025-09-19 00:39:47.831358 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:39:47.831367 | orchestrator | Friday 19 September 2025 00:39:41 +0000 (0:00:00.214) 0:00:33.142 ****** 2025-09-19 00:39:47.831376 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:47.831386 | orchestrator | 2025-09-19 00:39:47.831395 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:39:47.831405 | orchestrator | Friday 19 September 2025 00:39:41 +0000 (0:00:00.205) 0:00:33.347 ****** 2025-09-19 00:39:47.831414 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:47.831424 | orchestrator | 2025-09-19 00:39:47.831433 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-09-19 00:39:47.831443 | orchestrator | Friday 19 September 2025 00:39:41 +0000 (0:00:00.196) 0:00:33.543 ****** 2025-09-19 00:39:47.831452 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:47.831462 | orchestrator | 2025-09-19 00:39:47.831471 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:39:47.831481 | orchestrator | Friday 19 September 2025 00:39:41 +0000 (0:00:00.200) 0:00:33.744 ****** 2025-09-19 00:39:47.831491 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-19 00:39:47.831500 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-19 00:39:47.831510 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-19 00:39:47.831520 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-19 00:39:47.831529 | orchestrator | 2025-09-19 00:39:47.831539 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:39:47.831549 | orchestrator | Friday 19 September 2025 00:39:42 +0000 (0:00:00.838) 0:00:34.583 ****** 2025-09-19 00:39:47.831568 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:47.831578 | orchestrator | 2025-09-19 00:39:47.831588 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:39:47.831598 | orchestrator | Friday 19 September 2025 00:39:42 +0000 (0:00:00.189) 0:00:34.772 ****** 2025-09-19 00:39:47.831607 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:47.831616 | orchestrator | 2025-09-19 00:39:47.831626 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:39:47.831635 | orchestrator | Friday 19 September 2025 00:39:43 +0000 (0:00:00.201) 0:00:34.974 ****** 2025-09-19 00:39:47.831644 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:47.831654 | orchestrator | 2025-09-19 
00:39:47.831663 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:39:47.831673 | orchestrator | Friday 19 September 2025 00:39:43 +0000 (0:00:00.650) 0:00:35.624 ****** 2025-09-19 00:39:47.831682 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:47.831691 | orchestrator | 2025-09-19 00:39:47.831701 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-19 00:39:47.831710 | orchestrator | Friday 19 September 2025 00:39:43 +0000 (0:00:00.243) 0:00:35.868 ****** 2025-09-19 00:39:47.831720 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:47.831729 | orchestrator | 2025-09-19 00:39:47.831744 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-19 00:39:47.831754 | orchestrator | Friday 19 September 2025 00:39:44 +0000 (0:00:00.158) 0:00:36.027 ****** 2025-09-19 00:39:47.831763 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7c9f8b51-166c-5055-bfcb-65abe80d3110'}}) 2025-09-19 00:39:47.831773 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '25e4de26-ffd2-5ba5-a3e7-287c918a347b'}}) 2025-09-19 00:39:47.831783 | orchestrator | 2025-09-19 00:39:47.831792 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-19 00:39:47.831802 | orchestrator | Friday 19 September 2025 00:39:44 +0000 (0:00:00.220) 0:00:36.247 ****** 2025-09-19 00:39:47.831812 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7c9f8b51-166c-5055-bfcb-65abe80d3110', 'data_vg': 'ceph-7c9f8b51-166c-5055-bfcb-65abe80d3110'}) 2025-09-19 00:39:47.831824 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-25e4de26-ffd2-5ba5-a3e7-287c918a347b', 'data_vg': 'ceph-25e4de26-ffd2-5ba5-a3e7-287c918a347b'}) 2025-09-19 00:39:47.831833 | orchestrator | 2025-09-19 
00:39:47.831843 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-19 00:39:47.831853 | orchestrator | Friday 19 September 2025 00:39:46 +0000 (0:00:01.899) 0:00:38.147 ****** 2025-09-19 00:39:47.831862 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7c9f8b51-166c-5055-bfcb-65abe80d3110', 'data_vg': 'ceph-7c9f8b51-166c-5055-bfcb-65abe80d3110'})  2025-09-19 00:39:47.831873 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-25e4de26-ffd2-5ba5-a3e7-287c918a347b', 'data_vg': 'ceph-25e4de26-ffd2-5ba5-a3e7-287c918a347b'})  2025-09-19 00:39:47.831882 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:47.831892 | orchestrator | 2025-09-19 00:39:47.831901 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-19 00:39:47.831911 | orchestrator | Friday 19 September 2025 00:39:46 +0000 (0:00:00.159) 0:00:38.307 ****** 2025-09-19 00:39:47.831920 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7c9f8b51-166c-5055-bfcb-65abe80d3110', 'data_vg': 'ceph-7c9f8b51-166c-5055-bfcb-65abe80d3110'}) 2025-09-19 00:39:47.831930 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-25e4de26-ffd2-5ba5-a3e7-287c918a347b', 'data_vg': 'ceph-25e4de26-ffd2-5ba5-a3e7-287c918a347b'}) 2025-09-19 00:39:47.831939 | orchestrator | 2025-09-19 00:39:47.831954 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-19 00:39:53.410504 | orchestrator | Friday 19 September 2025 00:39:47 +0000 (0:00:01.463) 0:00:39.770 ****** 2025-09-19 00:39:53.410668 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7c9f8b51-166c-5055-bfcb-65abe80d3110', 'data_vg': 'ceph-7c9f8b51-166c-5055-bfcb-65abe80d3110'})  2025-09-19 00:39:53.410686 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-25e4de26-ffd2-5ba5-a3e7-287c918a347b', 
'data_vg': 'ceph-25e4de26-ffd2-5ba5-a3e7-287c918a347b'})  2025-09-19 00:39:53.410698 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:53.410710 | orchestrator | 2025-09-19 00:39:53.410723 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-19 00:39:53.410734 | orchestrator | Friday 19 September 2025 00:39:47 +0000 (0:00:00.157) 0:00:39.928 ****** 2025-09-19 00:39:53.410745 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:53.410756 | orchestrator | 2025-09-19 00:39:53.410767 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-19 00:39:53.410778 | orchestrator | Friday 19 September 2025 00:39:48 +0000 (0:00:00.141) 0:00:40.070 ****** 2025-09-19 00:39:53.410789 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7c9f8b51-166c-5055-bfcb-65abe80d3110', 'data_vg': 'ceph-7c9f8b51-166c-5055-bfcb-65abe80d3110'})  2025-09-19 00:39:53.410801 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-25e4de26-ffd2-5ba5-a3e7-287c918a347b', 'data_vg': 'ceph-25e4de26-ffd2-5ba5-a3e7-287c918a347b'})  2025-09-19 00:39:53.410812 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:53.410822 | orchestrator | 2025-09-19 00:39:53.410833 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-19 00:39:53.410844 | orchestrator | Friday 19 September 2025 00:39:48 +0000 (0:00:00.152) 0:00:40.223 ****** 2025-09-19 00:39:53.410854 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:53.410865 | orchestrator | 2025-09-19 00:39:53.410876 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-19 00:39:53.410886 | orchestrator | Friday 19 September 2025 00:39:48 +0000 (0:00:00.156) 0:00:40.379 ****** 2025-09-19 00:39:53.410897 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-7c9f8b51-166c-5055-bfcb-65abe80d3110', 'data_vg': 'ceph-7c9f8b51-166c-5055-bfcb-65abe80d3110'})  2025-09-19 00:39:53.410908 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-25e4de26-ffd2-5ba5-a3e7-287c918a347b', 'data_vg': 'ceph-25e4de26-ffd2-5ba5-a3e7-287c918a347b'})  2025-09-19 00:39:53.410919 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:53.410930 | orchestrator | 2025-09-19 00:39:53.410941 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-19 00:39:53.410952 | orchestrator | Friday 19 September 2025 00:39:48 +0000 (0:00:00.158) 0:00:40.537 ****** 2025-09-19 00:39:53.410962 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:53.410973 | orchestrator | 2025-09-19 00:39:53.411001 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-19 00:39:53.411015 | orchestrator | Friday 19 September 2025 00:39:48 +0000 (0:00:00.343) 0:00:40.880 ****** 2025-09-19 00:39:53.411028 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7c9f8b51-166c-5055-bfcb-65abe80d3110', 'data_vg': 'ceph-7c9f8b51-166c-5055-bfcb-65abe80d3110'})  2025-09-19 00:39:53.411041 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-25e4de26-ffd2-5ba5-a3e7-287c918a347b', 'data_vg': 'ceph-25e4de26-ffd2-5ba5-a3e7-287c918a347b'})  2025-09-19 00:39:53.411053 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:53.411066 | orchestrator | 2025-09-19 00:39:53.411079 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-19 00:39:53.411091 | orchestrator | Friday 19 September 2025 00:39:49 +0000 (0:00:00.182) 0:00:41.063 ****** 2025-09-19 00:39:53.411103 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:39:53.411117 | orchestrator | 2025-09-19 00:39:53.411166 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] 
**************** 2025-09-19 00:39:53.411179 | orchestrator | Friday 19 September 2025 00:39:49 +0000 (0:00:00.125) 0:00:41.189 ****** 2025-09-19 00:39:53.411203 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7c9f8b51-166c-5055-bfcb-65abe80d3110', 'data_vg': 'ceph-7c9f8b51-166c-5055-bfcb-65abe80d3110'})  2025-09-19 00:39:53.411216 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-25e4de26-ffd2-5ba5-a3e7-287c918a347b', 'data_vg': 'ceph-25e4de26-ffd2-5ba5-a3e7-287c918a347b'})  2025-09-19 00:39:53.411229 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:53.411242 | orchestrator | 2025-09-19 00:39:53.411254 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-19 00:39:53.411266 | orchestrator | Friday 19 September 2025 00:39:49 +0000 (0:00:00.143) 0:00:41.333 ****** 2025-09-19 00:39:53.411278 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7c9f8b51-166c-5055-bfcb-65abe80d3110', 'data_vg': 'ceph-7c9f8b51-166c-5055-bfcb-65abe80d3110'})  2025-09-19 00:39:53.411290 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-25e4de26-ffd2-5ba5-a3e7-287c918a347b', 'data_vg': 'ceph-25e4de26-ffd2-5ba5-a3e7-287c918a347b'})  2025-09-19 00:39:53.411302 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:53.411315 | orchestrator | 2025-09-19 00:39:53.411327 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-19 00:39:53.411339 | orchestrator | Friday 19 September 2025 00:39:49 +0000 (0:00:00.146) 0:00:41.479 ****** 2025-09-19 00:39:53.411370 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7c9f8b51-166c-5055-bfcb-65abe80d3110', 'data_vg': 'ceph-7c9f8b51-166c-5055-bfcb-65abe80d3110'})  2025-09-19 00:39:53.411382 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-25e4de26-ffd2-5ba5-a3e7-287c918a347b', 'data_vg': 
'ceph-25e4de26-ffd2-5ba5-a3e7-287c918a347b'})  2025-09-19 00:39:53.411393 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:53.411404 | orchestrator | 2025-09-19 00:39:53.411415 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-19 00:39:53.411426 | orchestrator | Friday 19 September 2025 00:39:49 +0000 (0:00:00.156) 0:00:41.636 ****** 2025-09-19 00:39:53.411437 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:53.411448 | orchestrator | 2025-09-19 00:39:53.411458 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-19 00:39:53.411469 | orchestrator | Friday 19 September 2025 00:39:49 +0000 (0:00:00.166) 0:00:41.802 ****** 2025-09-19 00:39:53.411480 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:53.411490 | orchestrator | 2025-09-19 00:39:53.411501 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-19 00:39:53.411512 | orchestrator | Friday 19 September 2025 00:39:49 +0000 (0:00:00.124) 0:00:41.927 ****** 2025-09-19 00:39:53.411523 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:53.411534 | orchestrator | 2025-09-19 00:39:53.411544 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-19 00:39:53.411555 | orchestrator | Friday 19 September 2025 00:39:50 +0000 (0:00:00.150) 0:00:42.077 ****** 2025-09-19 00:39:53.411566 | orchestrator | ok: [testbed-node-4] => { 2025-09-19 00:39:53.411577 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-19 00:39:53.411589 | orchestrator | } 2025-09-19 00:39:53.411600 | orchestrator | 2025-09-19 00:39:53.411611 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-19 00:39:53.411622 | orchestrator | Friday 19 September 2025 00:39:50 +0000 (0:00:00.139) 0:00:42.217 ****** 2025-09-19 00:39:53.411633 | orchestrator | 
ok: [testbed-node-4] => { 2025-09-19 00:39:53.411643 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-19 00:39:53.411654 | orchestrator | } 2025-09-19 00:39:53.411665 | orchestrator | 2025-09-19 00:39:53.411676 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-19 00:39:53.411687 | orchestrator | Friday 19 September 2025 00:39:50 +0000 (0:00:00.131) 0:00:42.349 ****** 2025-09-19 00:39:53.411697 | orchestrator | ok: [testbed-node-4] => { 2025-09-19 00:39:53.411708 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-19 00:39:53.411720 | orchestrator | } 2025-09-19 00:39:53.411743 | orchestrator | 2025-09-19 00:39:53.411754 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-19 00:39:53.411765 | orchestrator | Friday 19 September 2025 00:39:50 +0000 (0:00:00.128) 0:00:42.477 ****** 2025-09-19 00:39:53.411776 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:39:53.411786 | orchestrator | 2025-09-19 00:39:53.411797 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-19 00:39:53.411810 | orchestrator | Friday 19 September 2025 00:39:51 +0000 (0:00:00.733) 0:00:43.211 ****** 2025-09-19 00:39:53.411828 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:39:53.411847 | orchestrator | 2025-09-19 00:39:53.411866 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-19 00:39:53.411881 | orchestrator | Friday 19 September 2025 00:39:51 +0000 (0:00:00.500) 0:00:43.712 ****** 2025-09-19 00:39:53.411897 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:39:53.411914 | orchestrator | 2025-09-19 00:39:53.411933 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-19 00:39:53.411953 | orchestrator | Friday 19 September 2025 00:39:52 +0000 (0:00:00.581) 0:00:44.293 ****** 2025-09-19 
00:39:53.411970 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:39:53.411989 | orchestrator | 2025-09-19 00:39:53.412001 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-19 00:39:53.412012 | orchestrator | Friday 19 September 2025 00:39:52 +0000 (0:00:00.150) 0:00:44.443 ****** 2025-09-19 00:39:53.412023 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:53.412033 | orchestrator | 2025-09-19 00:39:53.412044 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-19 00:39:53.412065 | orchestrator | Friday 19 September 2025 00:39:52 +0000 (0:00:00.111) 0:00:44.555 ****** 2025-09-19 00:39:53.412076 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:53.412087 | orchestrator | 2025-09-19 00:39:53.412098 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-19 00:39:53.412109 | orchestrator | Friday 19 September 2025 00:39:52 +0000 (0:00:00.107) 0:00:44.663 ****** 2025-09-19 00:39:53.412142 | orchestrator | ok: [testbed-node-4] => { 2025-09-19 00:39:53.412154 | orchestrator |  "vgs_report": { 2025-09-19 00:39:53.412166 | orchestrator |  "vg": [] 2025-09-19 00:39:53.412177 | orchestrator |  } 2025-09-19 00:39:53.412188 | orchestrator | } 2025-09-19 00:39:53.412199 | orchestrator | 2025-09-19 00:39:53.412209 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-19 00:39:53.412220 | orchestrator | Friday 19 September 2025 00:39:52 +0000 (0:00:00.146) 0:00:44.810 ****** 2025-09-19 00:39:53.412231 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:53.412241 | orchestrator | 2025-09-19 00:39:53.412252 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-19 00:39:53.412263 | orchestrator | Friday 19 September 2025 00:39:52 +0000 (0:00:00.134) 0:00:44.944 ****** 2025-09-19 
00:39:53.412273 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:53.412284 | orchestrator | 2025-09-19 00:39:53.412295 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-19 00:39:53.412306 | orchestrator | Friday 19 September 2025 00:39:53 +0000 (0:00:00.140) 0:00:45.085 ****** 2025-09-19 00:39:53.412316 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:53.412327 | orchestrator | 2025-09-19 00:39:53.412338 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-19 00:39:53.412348 | orchestrator | Friday 19 September 2025 00:39:53 +0000 (0:00:00.140) 0:00:45.226 ****** 2025-09-19 00:39:53.412359 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:53.412370 | orchestrator | 2025-09-19 00:39:53.412381 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-19 00:39:53.412400 | orchestrator | Friday 19 September 2025 00:39:53 +0000 (0:00:00.127) 0:00:45.354 ****** 2025-09-19 00:39:58.020354 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:58.020471 | orchestrator | 2025-09-19 00:39:58.020499 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-19 00:39:58.020552 | orchestrator | Friday 19 September 2025 00:39:53 +0000 (0:00:00.137) 0:00:45.491 ****** 2025-09-19 00:39:58.020575 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:58.020595 | orchestrator | 2025-09-19 00:39:58.020608 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-19 00:39:58.020619 | orchestrator | Friday 19 September 2025 00:39:53 +0000 (0:00:00.346) 0:00:45.837 ****** 2025-09-19 00:39:58.020630 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:58.020641 | orchestrator | 2025-09-19 00:39:58.020652 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] 
**************** 2025-09-19 00:39:58.020662 | orchestrator | Friday 19 September 2025 00:39:54 +0000 (0:00:00.146) 0:00:45.984 ****** 2025-09-19 00:39:58.020673 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:58.020684 | orchestrator | 2025-09-19 00:39:58.020695 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-19 00:39:58.020706 | orchestrator | Friday 19 September 2025 00:39:54 +0000 (0:00:00.149) 0:00:46.133 ****** 2025-09-19 00:39:58.020717 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:58.020728 | orchestrator | 2025-09-19 00:39:58.020738 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-19 00:39:58.020749 | orchestrator | Friday 19 September 2025 00:39:54 +0000 (0:00:00.138) 0:00:46.271 ****** 2025-09-19 00:39:58.020760 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:58.020771 | orchestrator | 2025-09-19 00:39:58.020781 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-19 00:39:58.020792 | orchestrator | Friday 19 September 2025 00:39:54 +0000 (0:00:00.137) 0:00:46.409 ****** 2025-09-19 00:39:58.020803 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:58.020814 | orchestrator | 2025-09-19 00:39:58.020824 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-19 00:39:58.020835 | orchestrator | Friday 19 September 2025 00:39:54 +0000 (0:00:00.139) 0:00:46.549 ****** 2025-09-19 00:39:58.020846 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:58.020857 | orchestrator | 2025-09-19 00:39:58.020867 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-19 00:39:58.020878 | orchestrator | Friday 19 September 2025 00:39:54 +0000 (0:00:00.155) 0:00:46.704 ****** 2025-09-19 00:39:58.020892 | orchestrator | skipping: [testbed-node-4] 
2025-09-19 00:39:58.020904 | orchestrator | 2025-09-19 00:39:58.020917 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-19 00:39:58.020929 | orchestrator | Friday 19 September 2025 00:39:54 +0000 (0:00:00.148) 0:00:46.852 ****** 2025-09-19 00:39:58.020942 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:58.020954 | orchestrator | 2025-09-19 00:39:58.020967 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-19 00:39:58.020980 | orchestrator | Friday 19 September 2025 00:39:55 +0000 (0:00:00.144) 0:00:46.997 ****** 2025-09-19 00:39:58.021007 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7c9f8b51-166c-5055-bfcb-65abe80d3110', 'data_vg': 'ceph-7c9f8b51-166c-5055-bfcb-65abe80d3110'})  2025-09-19 00:39:58.021020 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-25e4de26-ffd2-5ba5-a3e7-287c918a347b', 'data_vg': 'ceph-25e4de26-ffd2-5ba5-a3e7-287c918a347b'})  2025-09-19 00:39:58.021032 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:58.021043 | orchestrator | 2025-09-19 00:39:58.021055 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-19 00:39:58.021065 | orchestrator | Friday 19 September 2025 00:39:55 +0000 (0:00:00.162) 0:00:47.159 ****** 2025-09-19 00:39:58.021076 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7c9f8b51-166c-5055-bfcb-65abe80d3110', 'data_vg': 'ceph-7c9f8b51-166c-5055-bfcb-65abe80d3110'})  2025-09-19 00:39:58.021087 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-25e4de26-ffd2-5ba5-a3e7-287c918a347b', 'data_vg': 'ceph-25e4de26-ffd2-5ba5-a3e7-287c918a347b'})  2025-09-19 00:39:58.021105 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:58.021141 | orchestrator | 2025-09-19 00:39:58.021154 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] 
************************************* 2025-09-19 00:39:58.021165 | orchestrator | Friday 19 September 2025 00:39:55 +0000 (0:00:00.153) 0:00:47.312 ****** 2025-09-19 00:39:58.021176 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7c9f8b51-166c-5055-bfcb-65abe80d3110', 'data_vg': 'ceph-7c9f8b51-166c-5055-bfcb-65abe80d3110'})  2025-09-19 00:39:58.021187 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-25e4de26-ffd2-5ba5-a3e7-287c918a347b', 'data_vg': 'ceph-25e4de26-ffd2-5ba5-a3e7-287c918a347b'})  2025-09-19 00:39:58.021198 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:58.021208 | orchestrator | 2025-09-19 00:39:58.021219 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-19 00:39:58.021230 | orchestrator | Friday 19 September 2025 00:39:55 +0000 (0:00:00.160) 0:00:47.473 ****** 2025-09-19 00:39:58.021240 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7c9f8b51-166c-5055-bfcb-65abe80d3110', 'data_vg': 'ceph-7c9f8b51-166c-5055-bfcb-65abe80d3110'})  2025-09-19 00:39:58.021251 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-25e4de26-ffd2-5ba5-a3e7-287c918a347b', 'data_vg': 'ceph-25e4de26-ffd2-5ba5-a3e7-287c918a347b'})  2025-09-19 00:39:58.021262 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:58.021273 | orchestrator | 2025-09-19 00:39:58.021283 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-19 00:39:58.021312 | orchestrator | Friday 19 September 2025 00:39:55 +0000 (0:00:00.326) 0:00:47.799 ****** 2025-09-19 00:39:58.021324 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7c9f8b51-166c-5055-bfcb-65abe80d3110', 'data_vg': 'ceph-7c9f8b51-166c-5055-bfcb-65abe80d3110'})  2025-09-19 00:39:58.021335 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-25e4de26-ffd2-5ba5-a3e7-287c918a347b', 
'data_vg': 'ceph-25e4de26-ffd2-5ba5-a3e7-287c918a347b'})  2025-09-19 00:39:58.021345 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:58.021356 | orchestrator | 2025-09-19 00:39:58.021367 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-19 00:39:58.021378 | orchestrator | Friday 19 September 2025 00:39:55 +0000 (0:00:00.140) 0:00:47.940 ****** 2025-09-19 00:39:58.021388 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7c9f8b51-166c-5055-bfcb-65abe80d3110', 'data_vg': 'ceph-7c9f8b51-166c-5055-bfcb-65abe80d3110'})  2025-09-19 00:39:58.021399 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-25e4de26-ffd2-5ba5-a3e7-287c918a347b', 'data_vg': 'ceph-25e4de26-ffd2-5ba5-a3e7-287c918a347b'})  2025-09-19 00:39:58.021410 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:58.021420 | orchestrator | 2025-09-19 00:39:58.021432 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-19 00:39:58.021443 | orchestrator | Friday 19 September 2025 00:39:56 +0000 (0:00:00.142) 0:00:48.082 ****** 2025-09-19 00:39:58.021454 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7c9f8b51-166c-5055-bfcb-65abe80d3110', 'data_vg': 'ceph-7c9f8b51-166c-5055-bfcb-65abe80d3110'})  2025-09-19 00:39:58.021465 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-25e4de26-ffd2-5ba5-a3e7-287c918a347b', 'data_vg': 'ceph-25e4de26-ffd2-5ba5-a3e7-287c918a347b'})  2025-09-19 00:39:58.021475 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:58.021486 | orchestrator | 2025-09-19 00:39:58.021496 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-19 00:39:58.021507 | orchestrator | Friday 19 September 2025 00:39:56 +0000 (0:00:00.138) 0:00:48.221 ****** 2025-09-19 00:39:58.021518 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-7c9f8b51-166c-5055-bfcb-65abe80d3110', 'data_vg': 'ceph-7c9f8b51-166c-5055-bfcb-65abe80d3110'})  2025-09-19 00:39:58.021528 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-25e4de26-ffd2-5ba5-a3e7-287c918a347b', 'data_vg': 'ceph-25e4de26-ffd2-5ba5-a3e7-287c918a347b'})  2025-09-19 00:39:58.021546 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:58.021556 | orchestrator | 2025-09-19 00:39:58.021573 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-19 00:39:58.021613 | orchestrator | Friday 19 September 2025 00:39:56 +0000 (0:00:00.141) 0:00:48.362 ****** 2025-09-19 00:39:58.021632 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:39:58.021650 | orchestrator | 2025-09-19 00:39:58.021668 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-19 00:39:58.021687 | orchestrator | Friday 19 September 2025 00:39:56 +0000 (0:00:00.494) 0:00:48.856 ****** 2025-09-19 00:39:58.021705 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:39:58.021723 | orchestrator | 2025-09-19 00:39:58.021741 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-19 00:39:58.021760 | orchestrator | Friday 19 September 2025 00:39:57 +0000 (0:00:00.515) 0:00:49.372 ****** 2025-09-19 00:39:58.021778 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:39:58.021798 | orchestrator | 2025-09-19 00:39:58.021817 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-19 00:39:58.021837 | orchestrator | Friday 19 September 2025 00:39:57 +0000 (0:00:00.144) 0:00:49.516 ****** 2025-09-19 00:39:58.021851 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-25e4de26-ffd2-5ba5-a3e7-287c918a347b', 'vg_name': 'ceph-25e4de26-ffd2-5ba5-a3e7-287c918a347b'}) 2025-09-19 00:39:58.021863 | orchestrator | ok: [testbed-node-4] => 
(item={'lv_name': 'osd-block-7c9f8b51-166c-5055-bfcb-65abe80d3110', 'vg_name': 'ceph-7c9f8b51-166c-5055-bfcb-65abe80d3110'}) 2025-09-19 00:39:58.021873 | orchestrator | 2025-09-19 00:39:58.021884 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-19 00:39:58.021895 | orchestrator | Friday 19 September 2025 00:39:57 +0000 (0:00:00.163) 0:00:49.679 ****** 2025-09-19 00:39:58.021905 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7c9f8b51-166c-5055-bfcb-65abe80d3110', 'data_vg': 'ceph-7c9f8b51-166c-5055-bfcb-65abe80d3110'})  2025-09-19 00:39:58.021916 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-25e4de26-ffd2-5ba5-a3e7-287c918a347b', 'data_vg': 'ceph-25e4de26-ffd2-5ba5-a3e7-287c918a347b'})  2025-09-19 00:39:58.021927 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:39:58.021938 | orchestrator | 2025-09-19 00:39:58.021949 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-19 00:39:58.021959 | orchestrator | Friday 19 September 2025 00:39:57 +0000 (0:00:00.144) 0:00:49.824 ****** 2025-09-19 00:39:58.021970 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7c9f8b51-166c-5055-bfcb-65abe80d3110', 'data_vg': 'ceph-7c9f8b51-166c-5055-bfcb-65abe80d3110'})  2025-09-19 00:39:58.021981 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-25e4de26-ffd2-5ba5-a3e7-287c918a347b', 'data_vg': 'ceph-25e4de26-ffd2-5ba5-a3e7-287c918a347b'})  2025-09-19 00:39:58.022002 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:40:03.854827 | orchestrator | 2025-09-19 00:40:03.854939 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-19 00:40:03.854954 | orchestrator | Friday 19 September 2025 00:39:58 +0000 (0:00:00.143) 0:00:49.968 ****** 2025-09-19 00:40:03.854966 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-7c9f8b51-166c-5055-bfcb-65abe80d3110', 'data_vg': 'ceph-7c9f8b51-166c-5055-bfcb-65abe80d3110'})
2025-09-19 00:40:03.854978 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-25e4de26-ffd2-5ba5-a3e7-287c918a347b', 'data_vg': 'ceph-25e4de26-ffd2-5ba5-a3e7-287c918a347b'})
2025-09-19 00:40:03.854988 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:40:03.854999 | orchestrator |
2025-09-19 00:40:03.855008 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-19 00:40:03.855018 | orchestrator | Friday 19 September 2025 00:39:58 +0000 (0:00:00.138) 0:00:50.106 ******
2025-09-19 00:40:03.855049 | orchestrator | ok: [testbed-node-4] => {
2025-09-19 00:40:03.855059 | orchestrator |  "lvm_report": {
2025-09-19 00:40:03.855071 | orchestrator |  "lv": [
2025-09-19 00:40:03.855081 | orchestrator |  {
2025-09-19 00:40:03.855091 | orchestrator |  "lv_name": "osd-block-25e4de26-ffd2-5ba5-a3e7-287c918a347b",
2025-09-19 00:40:03.855102 | orchestrator |  "vg_name": "ceph-25e4de26-ffd2-5ba5-a3e7-287c918a347b"
2025-09-19 00:40:03.855159 | orchestrator |  },
2025-09-19 00:40:03.855170 | orchestrator |  {
2025-09-19 00:40:03.855180 | orchestrator |  "lv_name": "osd-block-7c9f8b51-166c-5055-bfcb-65abe80d3110",
2025-09-19 00:40:03.855189 | orchestrator |  "vg_name": "ceph-7c9f8b51-166c-5055-bfcb-65abe80d3110"
2025-09-19 00:40:03.855199 | orchestrator |  }
2025-09-19 00:40:03.855208 | orchestrator |  ],
2025-09-19 00:40:03.855218 | orchestrator |  "pv": [
2025-09-19 00:40:03.855227 | orchestrator |  {
2025-09-19 00:40:03.855237 | orchestrator |  "pv_name": "/dev/sdb",
2025-09-19 00:40:03.855246 | orchestrator |  "vg_name": "ceph-7c9f8b51-166c-5055-bfcb-65abe80d3110"
2025-09-19 00:40:03.855256 | orchestrator |  },
2025-09-19 00:40:03.855265 | orchestrator |  {
2025-09-19 00:40:03.855274 | orchestrator |  "pv_name": "/dev/sdc",
2025-09-19 00:40:03.855284 | orchestrator |  "vg_name": "ceph-25e4de26-ffd2-5ba5-a3e7-287c918a347b"
2025-09-19 00:40:03.855293 | orchestrator |  }
2025-09-19 00:40:03.855303 | orchestrator |  ]
2025-09-19 00:40:03.855312 | orchestrator |  }
2025-09-19 00:40:03.855322 | orchestrator | }
2025-09-19 00:40:03.855332 | orchestrator |
2025-09-19 00:40:03.855342 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-09-19 00:40:03.855354 | orchestrator |
2025-09-19 00:40:03.855365 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-19 00:40:03.855376 | orchestrator | Friday 19 September 2025 00:39:58 +0000 (0:00:00.416) 0:00:50.523 ******
2025-09-19 00:40:03.855387 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-09-19 00:40:03.855398 | orchestrator |
2025-09-19 00:40:03.855410 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-19 00:40:03.855421 | orchestrator | Friday 19 September 2025 00:39:58 +0000 (0:00:00.194) 0:00:50.756 ******
2025-09-19 00:40:03.855432 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:40:03.855443 | orchestrator |
2025-09-19 00:40:03.855455 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 00:40:03.855467 | orchestrator | Friday 19 September 2025 00:39:58 +0000 (0:00:00.379) 0:00:50.950 ******
2025-09-19 00:40:03.855478 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-09-19 00:40:03.855489 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-09-19 00:40:03.855500 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-09-19 00:40:03.855511 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-09-19 00:40:03.855523 | orchestrator | included:
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-19 00:40:03.855534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-19 00:40:03.855545 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-19 00:40:03.855556 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-19 00:40:03.855567 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-19 00:40:03.855579 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-19 00:40:03.855590 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-19 00:40:03.855609 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-19 00:40:03.855620 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-19 00:40:03.855631 | orchestrator | 2025-09-19 00:40:03.855643 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:40:03.855654 | orchestrator | Friday 19 September 2025 00:39:59 +0000 (0:00:00.379) 0:00:51.330 ****** 2025-09-19 00:40:03.855665 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:03.855675 | orchestrator | 2025-09-19 00:40:03.855691 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:40:03.855702 | orchestrator | Friday 19 September 2025 00:39:59 +0000 (0:00:00.197) 0:00:51.528 ****** 2025-09-19 00:40:03.855714 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:03.855723 | orchestrator | 2025-09-19 00:40:03.855733 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:40:03.855760 | orchestrator | 
Friday 19 September 2025 00:39:59 +0000 (0:00:00.203) 0:00:51.731 ****** 2025-09-19 00:40:03.855771 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:03.855780 | orchestrator | 2025-09-19 00:40:03.855790 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:40:03.855799 | orchestrator | Friday 19 September 2025 00:39:59 +0000 (0:00:00.198) 0:00:51.930 ****** 2025-09-19 00:40:03.855809 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:03.855818 | orchestrator | 2025-09-19 00:40:03.855828 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:40:03.855838 | orchestrator | Friday 19 September 2025 00:40:00 +0000 (0:00:00.186) 0:00:52.117 ****** 2025-09-19 00:40:03.855892 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:03.855903 | orchestrator | 2025-09-19 00:40:03.855912 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:40:03.855922 | orchestrator | Friday 19 September 2025 00:40:00 +0000 (0:00:00.216) 0:00:52.333 ****** 2025-09-19 00:40:03.855932 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:03.855941 | orchestrator | 2025-09-19 00:40:03.855951 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:40:03.855960 | orchestrator | Friday 19 September 2025 00:40:00 +0000 (0:00:00.530) 0:00:52.864 ****** 2025-09-19 00:40:03.855970 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:03.855979 | orchestrator | 2025-09-19 00:40:03.855989 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:40:03.855998 | orchestrator | Friday 19 September 2025 00:40:01 +0000 (0:00:00.202) 0:00:53.067 ****** 2025-09-19 00:40:03.856008 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:03.856017 | orchestrator | 2025-09-19 00:40:03.856027 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:40:03.856036 | orchestrator | Friday 19 September 2025 00:40:01 +0000 (0:00:00.217) 0:00:53.284 ****** 2025-09-19 00:40:03.856045 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3) 2025-09-19 00:40:03.856056 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3) 2025-09-19 00:40:03.856066 | orchestrator | 2025-09-19 00:40:03.856076 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:40:03.856085 | orchestrator | Friday 19 September 2025 00:40:01 +0000 (0:00:00.419) 0:00:53.704 ****** 2025-09-19 00:40:03.856095 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5c96df58-7556-4413-84d6-ffa963b8d5b4) 2025-09-19 00:40:03.856104 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5c96df58-7556-4413-84d6-ffa963b8d5b4) 2025-09-19 00:40:03.856143 | orchestrator | 2025-09-19 00:40:03.856153 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:40:03.856163 | orchestrator | Friday 19 September 2025 00:40:02 +0000 (0:00:00.416) 0:00:54.121 ****** 2025-09-19 00:40:03.856177 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_037340a3-0b4d-471e-9cf4-4052731628bd) 2025-09-19 00:40:03.856193 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_037340a3-0b4d-471e-9cf4-4052731628bd) 2025-09-19 00:40:03.856203 | orchestrator | 2025-09-19 00:40:03.856213 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:40:03.856222 | orchestrator | Friday 19 September 2025 00:40:02 +0000 (0:00:00.405) 0:00:54.526 ****** 2025-09-19 00:40:03.856232 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_253dac68-3781-42b7-8d02-e83cc46bb576) 2025-09-19 00:40:03.856241 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_253dac68-3781-42b7-8d02-e83cc46bb576) 2025-09-19 00:40:03.856251 | orchestrator | 2025-09-19 00:40:03.856260 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 00:40:03.856269 | orchestrator | Friday 19 September 2025 00:40:03 +0000 (0:00:00.457) 0:00:54.984 ****** 2025-09-19 00:40:03.856279 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-19 00:40:03.856288 | orchestrator | 2025-09-19 00:40:03.856298 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:40:03.856307 | orchestrator | Friday 19 September 2025 00:40:03 +0000 (0:00:00.365) 0:00:55.350 ****** 2025-09-19 00:40:03.856316 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-19 00:40:03.856326 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-19 00:40:03.856335 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-19 00:40:03.856344 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-19 00:40:03.856354 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-19 00:40:03.856363 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-19 00:40:03.856373 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-19 00:40:03.856382 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-19 00:40:03.856391 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-19 00:40:03.856401 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-19 00:40:03.856410 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-19 00:40:03.856427 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-19 00:40:13.041062 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-19 00:40:13.041191 | orchestrator | 2025-09-19 00:40:13.041208 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:40:13.041216 | orchestrator | Friday 19 September 2025 00:40:03 +0000 (0:00:00.446) 0:00:55.796 ****** 2025-09-19 00:40:13.041224 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:13.041232 | orchestrator | 2025-09-19 00:40:13.041240 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:40:13.041247 | orchestrator | Friday 19 September 2025 00:40:04 +0000 (0:00:00.194) 0:00:55.991 ****** 2025-09-19 00:40:13.041254 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:13.041261 | orchestrator | 2025-09-19 00:40:13.041269 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:40:13.041276 | orchestrator | Friday 19 September 2025 00:40:04 +0000 (0:00:00.204) 0:00:56.196 ****** 2025-09-19 00:40:13.041284 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:13.041291 | orchestrator | 2025-09-19 00:40:13.041298 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:40:13.041305 | orchestrator | Friday 19 September 2025 00:40:04 +0000 (0:00:00.607) 0:00:56.804 ****** 2025-09-19 00:40:13.041338 | orchestrator | 
skipping: [testbed-node-5] 2025-09-19 00:40:13.041345 | orchestrator | 2025-09-19 00:40:13.041352 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:40:13.041360 | orchestrator | Friday 19 September 2025 00:40:05 +0000 (0:00:00.208) 0:00:57.012 ****** 2025-09-19 00:40:13.041367 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:13.041374 | orchestrator | 2025-09-19 00:40:13.041381 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:40:13.041388 | orchestrator | Friday 19 September 2025 00:40:05 +0000 (0:00:00.216) 0:00:57.228 ****** 2025-09-19 00:40:13.041395 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:13.041402 | orchestrator | 2025-09-19 00:40:13.041410 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:40:13.041417 | orchestrator | Friday 19 September 2025 00:40:05 +0000 (0:00:00.207) 0:00:57.436 ****** 2025-09-19 00:40:13.041424 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:13.041431 | orchestrator | 2025-09-19 00:40:13.041438 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:40:13.041445 | orchestrator | Friday 19 September 2025 00:40:05 +0000 (0:00:00.216) 0:00:57.653 ****** 2025-09-19 00:40:13.041452 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:13.041459 | orchestrator | 2025-09-19 00:40:13.041467 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:40:13.041474 | orchestrator | Friday 19 September 2025 00:40:05 +0000 (0:00:00.195) 0:00:57.849 ****** 2025-09-19 00:40:13.041481 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-19 00:40:13.041489 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-19 00:40:13.041496 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-19 
00:40:13.041519 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-19 00:40:13.041527 | orchestrator | 2025-09-19 00:40:13.041534 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:40:13.041541 | orchestrator | Friday 19 September 2025 00:40:06 +0000 (0:00:00.638) 0:00:58.487 ****** 2025-09-19 00:40:13.041548 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:13.041554 | orchestrator | 2025-09-19 00:40:13.041560 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:40:13.041566 | orchestrator | Friday 19 September 2025 00:40:06 +0000 (0:00:00.221) 0:00:58.708 ****** 2025-09-19 00:40:13.041572 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:13.041578 | orchestrator | 2025-09-19 00:40:13.041584 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:40:13.041590 | orchestrator | Friday 19 September 2025 00:40:06 +0000 (0:00:00.226) 0:00:58.935 ****** 2025-09-19 00:40:13.041597 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:13.041604 | orchestrator | 2025-09-19 00:40:13.041611 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 00:40:13.041618 | orchestrator | Friday 19 September 2025 00:40:07 +0000 (0:00:00.204) 0:00:59.139 ****** 2025-09-19 00:40:13.041625 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:13.041632 | orchestrator | 2025-09-19 00:40:13.041638 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-19 00:40:13.041645 | orchestrator | Friday 19 September 2025 00:40:07 +0000 (0:00:00.215) 0:00:59.355 ****** 2025-09-19 00:40:13.041651 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:13.041658 | orchestrator | 2025-09-19 00:40:13.041665 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-09-19 00:40:13.041672 | orchestrator | Friday 19 September 2025 00:40:07 +0000 (0:00:00.379) 0:00:59.734 ****** 2025-09-19 00:40:13.041679 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9c5ae36c-b075-5e22-9b23-69e08de6e546'}}) 2025-09-19 00:40:13.041686 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3271a5cd-b931-506b-9a72-a7bc6b6b65fd'}}) 2025-09-19 00:40:13.041703 | orchestrator | 2025-09-19 00:40:13.041711 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-19 00:40:13.041718 | orchestrator | Friday 19 September 2025 00:40:08 +0000 (0:00:00.221) 0:00:59.956 ****** 2025-09-19 00:40:13.041726 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9c5ae36c-b075-5e22-9b23-69e08de6e546', 'data_vg': 'ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546'}) 2025-09-19 00:40:13.041736 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3271a5cd-b931-506b-9a72-a7bc6b6b65fd', 'data_vg': 'ceph-3271a5cd-b931-506b-9a72-a7bc6b6b65fd'}) 2025-09-19 00:40:13.041743 | orchestrator | 2025-09-19 00:40:13.041751 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-19 00:40:13.041777 | orchestrator | Friday 19 September 2025 00:40:09 +0000 (0:00:01.858) 0:01:01.815 ****** 2025-09-19 00:40:13.041786 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9c5ae36c-b075-5e22-9b23-69e08de6e546', 'data_vg': 'ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546'})  2025-09-19 00:40:13.041795 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3271a5cd-b931-506b-9a72-a7bc6b6b65fd', 'data_vg': 'ceph-3271a5cd-b931-506b-9a72-a7bc6b6b65fd'})  2025-09-19 00:40:13.041802 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:13.041809 | orchestrator | 2025-09-19 00:40:13.041817 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-09-19 00:40:13.041824 | orchestrator | Friday 19 September 2025 00:40:10 +0000 (0:00:00.183) 0:01:01.999 ****** 2025-09-19 00:40:13.041832 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9c5ae36c-b075-5e22-9b23-69e08de6e546', 'data_vg': 'ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546'}) 2025-09-19 00:40:13.041840 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3271a5cd-b931-506b-9a72-a7bc6b6b65fd', 'data_vg': 'ceph-3271a5cd-b931-506b-9a72-a7bc6b6b65fd'}) 2025-09-19 00:40:13.041848 | orchestrator | 2025-09-19 00:40:13.041855 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-19 00:40:13.041863 | orchestrator | Friday 19 September 2025 00:40:11 +0000 (0:00:01.374) 0:01:03.374 ****** 2025-09-19 00:40:13.041870 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9c5ae36c-b075-5e22-9b23-69e08de6e546', 'data_vg': 'ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546'})  2025-09-19 00:40:13.041878 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3271a5cd-b931-506b-9a72-a7bc6b6b65fd', 'data_vg': 'ceph-3271a5cd-b931-506b-9a72-a7bc6b6b65fd'})  2025-09-19 00:40:13.041885 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:13.041893 | orchestrator | 2025-09-19 00:40:13.041900 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-19 00:40:13.041907 | orchestrator | Friday 19 September 2025 00:40:11 +0000 (0:00:00.171) 0:01:03.545 ****** 2025-09-19 00:40:13.041915 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:13.041922 | orchestrator | 2025-09-19 00:40:13.041929 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-19 00:40:13.041936 | orchestrator | Friday 19 September 2025 00:40:11 +0000 (0:00:00.127) 0:01:03.672 ****** 2025-09-19 00:40:13.041945 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9c5ae36c-b075-5e22-9b23-69e08de6e546', 'data_vg': 'ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546'})  2025-09-19 00:40:13.041959 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3271a5cd-b931-506b-9a72-a7bc6b6b65fd', 'data_vg': 'ceph-3271a5cd-b931-506b-9a72-a7bc6b6b65fd'})  2025-09-19 00:40:13.041967 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:13.041975 | orchestrator | 2025-09-19 00:40:13.041982 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-19 00:40:13.041988 | orchestrator | Friday 19 September 2025 00:40:11 +0000 (0:00:00.176) 0:01:03.849 ****** 2025-09-19 00:40:13.041994 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:13.042001 | orchestrator | 2025-09-19 00:40:13.042007 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-19 00:40:13.042072 | orchestrator | Friday 19 September 2025 00:40:12 +0000 (0:00:00.154) 0:01:04.004 ****** 2025-09-19 00:40:13.042080 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9c5ae36c-b075-5e22-9b23-69e08de6e546', 'data_vg': 'ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546'})  2025-09-19 00:40:13.042086 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3271a5cd-b931-506b-9a72-a7bc6b6b65fd', 'data_vg': 'ceph-3271a5cd-b931-506b-9a72-a7bc6b6b65fd'})  2025-09-19 00:40:13.042093 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:13.042099 | orchestrator | 2025-09-19 00:40:13.042124 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-19 00:40:13.042130 | orchestrator | Friday 19 September 2025 00:40:12 +0000 (0:00:00.172) 0:01:04.177 ****** 2025-09-19 00:40:13.042136 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:13.042142 | orchestrator | 2025-09-19 00:40:13.042148 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-09-19 00:40:13.042154 | orchestrator | Friday 19 September 2025 00:40:12 +0000 (0:00:00.164) 0:01:04.341 ****** 2025-09-19 00:40:13.042185 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9c5ae36c-b075-5e22-9b23-69e08de6e546', 'data_vg': 'ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546'})  2025-09-19 00:40:13.042192 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3271a5cd-b931-506b-9a72-a7bc6b6b65fd', 'data_vg': 'ceph-3271a5cd-b931-506b-9a72-a7bc6b6b65fd'})  2025-09-19 00:40:13.042198 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:13.042203 | orchestrator | 2025-09-19 00:40:13.042209 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-19 00:40:13.042215 | orchestrator | Friday 19 September 2025 00:40:12 +0000 (0:00:00.147) 0:01:04.489 ****** 2025-09-19 00:40:13.042221 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:40:13.042228 | orchestrator | 2025-09-19 00:40:13.042234 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-19 00:40:13.042240 | orchestrator | Friday 19 September 2025 00:40:12 +0000 (0:00:00.141) 0:01:04.630 ****** 2025-09-19 00:40:13.042257 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9c5ae36c-b075-5e22-9b23-69e08de6e546', 'data_vg': 'ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546'})  2025-09-19 00:40:19.264941 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3271a5cd-b931-506b-9a72-a7bc6b6b65fd', 'data_vg': 'ceph-3271a5cd-b931-506b-9a72-a7bc6b6b65fd'})  2025-09-19 00:40:19.265040 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:19.265054 | orchestrator | 2025-09-19 00:40:19.265067 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-19 00:40:19.265080 | orchestrator | Friday 19 September 2025 
00:40:13 +0000 (0:00:00.358) 0:01:04.988 ****** 2025-09-19 00:40:19.265091 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9c5ae36c-b075-5e22-9b23-69e08de6e546', 'data_vg': 'ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546'})  2025-09-19 00:40:19.265140 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3271a5cd-b931-506b-9a72-a7bc6b6b65fd', 'data_vg': 'ceph-3271a5cd-b931-506b-9a72-a7bc6b6b65fd'})  2025-09-19 00:40:19.265152 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:19.265163 | orchestrator | 2025-09-19 00:40:19.265174 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-19 00:40:19.265186 | orchestrator | Friday 19 September 2025 00:40:13 +0000 (0:00:00.151) 0:01:05.140 ****** 2025-09-19 00:40:19.265197 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9c5ae36c-b075-5e22-9b23-69e08de6e546', 'data_vg': 'ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546'})  2025-09-19 00:40:19.265208 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3271a5cd-b931-506b-9a72-a7bc6b6b65fd', 'data_vg': 'ceph-3271a5cd-b931-506b-9a72-a7bc6b6b65fd'})  2025-09-19 00:40:19.265218 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:19.265229 | orchestrator | 2025-09-19 00:40:19.265269 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-19 00:40:19.265281 | orchestrator | Friday 19 September 2025 00:40:13 +0000 (0:00:00.179) 0:01:05.320 ****** 2025-09-19 00:40:19.265291 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:19.265302 | orchestrator | 2025-09-19 00:40:19.265312 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-19 00:40:19.265323 | orchestrator | Friday 19 September 2025 00:40:13 +0000 (0:00:00.133) 0:01:05.453 ****** 2025-09-19 00:40:19.265334 | orchestrator | skipping: [testbed-node-5] 2025-09-19 
00:40:19.265344 | orchestrator | 2025-09-19 00:40:19.265355 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-19 00:40:19.265366 | orchestrator | Friday 19 September 2025 00:40:13 +0000 (0:00:00.137) 0:01:05.591 ****** 2025-09-19 00:40:19.265376 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:19.265387 | orchestrator | 2025-09-19 00:40:19.265398 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-19 00:40:19.265408 | orchestrator | Friday 19 September 2025 00:40:13 +0000 (0:00:00.148) 0:01:05.739 ****** 2025-09-19 00:40:19.265419 | orchestrator | ok: [testbed-node-5] => { 2025-09-19 00:40:19.265430 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-19 00:40:19.265441 | orchestrator | } 2025-09-19 00:40:19.265452 | orchestrator | 2025-09-19 00:40:19.265463 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-19 00:40:19.265475 | orchestrator | Friday 19 September 2025 00:40:13 +0000 (0:00:00.155) 0:01:05.895 ****** 2025-09-19 00:40:19.265487 | orchestrator | ok: [testbed-node-5] => { 2025-09-19 00:40:19.265499 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-19 00:40:19.265511 | orchestrator | } 2025-09-19 00:40:19.265523 | orchestrator | 2025-09-19 00:40:19.265535 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-19 00:40:19.265547 | orchestrator | Friday 19 September 2025 00:40:14 +0000 (0:00:00.142) 0:01:06.037 ****** 2025-09-19 00:40:19.265560 | orchestrator | ok: [testbed-node-5] => { 2025-09-19 00:40:19.265572 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-19 00:40:19.265585 | orchestrator | } 2025-09-19 00:40:19.265598 | orchestrator | 2025-09-19 00:40:19.265610 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-19 00:40:19.265622 | 
orchestrator | Friday 19 September 2025 00:40:14 +0000 (0:00:00.156) 0:01:06.194 ****** 2025-09-19 00:40:19.265635 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:40:19.265647 | orchestrator | 2025-09-19 00:40:19.265659 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-19 00:40:19.265671 | orchestrator | Friday 19 September 2025 00:40:14 +0000 (0:00:00.495) 0:01:06.689 ****** 2025-09-19 00:40:19.265684 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:40:19.265696 | orchestrator | 2025-09-19 00:40:19.265708 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-19 00:40:19.265720 | orchestrator | Friday 19 September 2025 00:40:15 +0000 (0:00:00.555) 0:01:07.244 ****** 2025-09-19 00:40:19.265732 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:40:19.265744 | orchestrator | 2025-09-19 00:40:19.265757 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-19 00:40:19.265769 | orchestrator | Friday 19 September 2025 00:40:15 +0000 (0:00:00.528) 0:01:07.773 ****** 2025-09-19 00:40:19.265781 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:40:19.265792 | orchestrator | 2025-09-19 00:40:19.265804 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-19 00:40:19.265817 | orchestrator | Friday 19 September 2025 00:40:16 +0000 (0:00:00.372) 0:01:08.145 ****** 2025-09-19 00:40:19.265829 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:19.265841 | orchestrator | 2025-09-19 00:40:19.265852 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-19 00:40:19.265862 | orchestrator | Friday 19 September 2025 00:40:16 +0000 (0:00:00.127) 0:01:08.272 ****** 2025-09-19 00:40:19.265873 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:19.265890 | orchestrator | 2025-09-19 00:40:19.265901 | 
orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-19 00:40:19.265932 | orchestrator | Friday 19 September 2025 00:40:16 +0000 (0:00:00.113) 0:01:08.386 ****** 2025-09-19 00:40:19.265944 | orchestrator | ok: [testbed-node-5] => { 2025-09-19 00:40:19.265955 | orchestrator |  "vgs_report": { 2025-09-19 00:40:19.265967 | orchestrator |  "vg": [] 2025-09-19 00:40:19.265994 | orchestrator |  } 2025-09-19 00:40:19.266006 | orchestrator | } 2025-09-19 00:40:19.266070 | orchestrator | 2025-09-19 00:40:19.266082 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-19 00:40:19.266093 | orchestrator | Friday 19 September 2025 00:40:16 +0000 (0:00:00.167) 0:01:08.554 ****** 2025-09-19 00:40:19.266122 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:19.266133 | orchestrator | 2025-09-19 00:40:19.266143 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-19 00:40:19.266154 | orchestrator | Friday 19 September 2025 00:40:16 +0000 (0:00:00.142) 0:01:08.697 ****** 2025-09-19 00:40:19.266165 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:19.266175 | orchestrator | 2025-09-19 00:40:19.266186 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-19 00:40:19.266197 | orchestrator | Friday 19 September 2025 00:40:16 +0000 (0:00:00.139) 0:01:08.836 ****** 2025-09-19 00:40:19.266207 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:19.266218 | orchestrator | 2025-09-19 00:40:19.266228 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-19 00:40:19.266239 | orchestrator | Friday 19 September 2025 00:40:17 +0000 (0:00:00.146) 0:01:08.983 ****** 2025-09-19 00:40:19.266250 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:19.266260 | orchestrator | 2025-09-19 00:40:19.266271 | 
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-19 00:40:19.266282 | orchestrator | Friday 19 September 2025 00:40:17 +0000 (0:00:00.139) 0:01:09.122 ****** 2025-09-19 00:40:19.266292 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:19.266303 | orchestrator | 2025-09-19 00:40:19.266314 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-19 00:40:19.266324 | orchestrator | Friday 19 September 2025 00:40:17 +0000 (0:00:00.154) 0:01:09.277 ****** 2025-09-19 00:40:19.266335 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:19.266345 | orchestrator | 2025-09-19 00:40:19.266356 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-19 00:40:19.266367 | orchestrator | Friday 19 September 2025 00:40:17 +0000 (0:00:00.133) 0:01:09.411 ****** 2025-09-19 00:40:19.266377 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:19.266388 | orchestrator | 2025-09-19 00:40:19.266399 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-19 00:40:19.266409 | orchestrator | Friday 19 September 2025 00:40:17 +0000 (0:00:00.141) 0:01:09.553 ****** 2025-09-19 00:40:19.266420 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:19.266431 | orchestrator | 2025-09-19 00:40:19.266441 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-19 00:40:19.266452 | orchestrator | Friday 19 September 2025 00:40:17 +0000 (0:00:00.147) 0:01:09.700 ****** 2025-09-19 00:40:19.266462 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:19.266473 | orchestrator | 2025-09-19 00:40:19.266484 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-19 00:40:19.266494 | orchestrator | Friday 19 September 2025 00:40:18 +0000 (0:00:00.330) 0:01:10.030 ****** 
2025-09-19 00:40:19.266518 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:19.266529 | orchestrator | 2025-09-19 00:40:19.266540 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-19 00:40:19.266551 | orchestrator | Friday 19 September 2025 00:40:18 +0000 (0:00:00.138) 0:01:10.169 ****** 2025-09-19 00:40:19.266561 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:19.266572 | orchestrator | 2025-09-19 00:40:19.266583 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-19 00:40:19.266601 | orchestrator | Friday 19 September 2025 00:40:18 +0000 (0:00:00.146) 0:01:10.315 ****** 2025-09-19 00:40:19.266612 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:19.266622 | orchestrator | 2025-09-19 00:40:19.266633 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-19 00:40:19.266644 | orchestrator | Friday 19 September 2025 00:40:18 +0000 (0:00:00.137) 0:01:10.452 ****** 2025-09-19 00:40:19.266654 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:19.266665 | orchestrator | 2025-09-19 00:40:19.266676 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-19 00:40:19.266686 | orchestrator | Friday 19 September 2025 00:40:18 +0000 (0:00:00.145) 0:01:10.598 ****** 2025-09-19 00:40:19.266697 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:19.266708 | orchestrator | 2025-09-19 00:40:19.266718 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-19 00:40:19.266729 | orchestrator | Friday 19 September 2025 00:40:18 +0000 (0:00:00.141) 0:01:10.740 ****** 2025-09-19 00:40:19.266739 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9c5ae36c-b075-5e22-9b23-69e08de6e546', 'data_vg': 'ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546'})  2025-09-19 
00:40:19.266750 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3271a5cd-b931-506b-9a72-a7bc6b6b65fd', 'data_vg': 'ceph-3271a5cd-b931-506b-9a72-a7bc6b6b65fd'})  2025-09-19 00:40:19.266761 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:19.266772 | orchestrator | 2025-09-19 00:40:19.266782 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-19 00:40:19.266793 | orchestrator | Friday 19 September 2025 00:40:18 +0000 (0:00:00.154) 0:01:10.895 ****** 2025-09-19 00:40:19.266804 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9c5ae36c-b075-5e22-9b23-69e08de6e546', 'data_vg': 'ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546'})  2025-09-19 00:40:19.266814 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3271a5cd-b931-506b-9a72-a7bc6b6b65fd', 'data_vg': 'ceph-3271a5cd-b931-506b-9a72-a7bc6b6b65fd'})  2025-09-19 00:40:19.266825 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:19.266836 | orchestrator | 2025-09-19 00:40:19.266846 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-19 00:40:19.266857 | orchestrator | Friday 19 September 2025 00:40:19 +0000 (0:00:00.162) 0:01:11.057 ****** 2025-09-19 00:40:19.266875 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9c5ae36c-b075-5e22-9b23-69e08de6e546', 'data_vg': 'ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546'})  2025-09-19 00:40:22.297453 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3271a5cd-b931-506b-9a72-a7bc6b6b65fd', 'data_vg': 'ceph-3271a5cd-b931-506b-9a72-a7bc6b6b65fd'})  2025-09-19 00:40:22.297562 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:22.297578 | orchestrator | 2025-09-19 00:40:22.297591 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-19 00:40:22.297604 | orchestrator | Friday 19 September 2025 
00:40:19 +0000 (0:00:00.153) 0:01:11.211 ****** 2025-09-19 00:40:22.297615 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9c5ae36c-b075-5e22-9b23-69e08de6e546', 'data_vg': 'ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546'})  2025-09-19 00:40:22.297626 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3271a5cd-b931-506b-9a72-a7bc6b6b65fd', 'data_vg': 'ceph-3271a5cd-b931-506b-9a72-a7bc6b6b65fd'})  2025-09-19 00:40:22.297637 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:22.297648 | orchestrator | 2025-09-19 00:40:22.297659 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-19 00:40:22.297669 | orchestrator | Friday 19 September 2025 00:40:19 +0000 (0:00:00.148) 0:01:11.360 ****** 2025-09-19 00:40:22.297680 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9c5ae36c-b075-5e22-9b23-69e08de6e546', 'data_vg': 'ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546'})  2025-09-19 00:40:22.297715 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3271a5cd-b931-506b-9a72-a7bc6b6b65fd', 'data_vg': 'ceph-3271a5cd-b931-506b-9a72-a7bc6b6b65fd'})  2025-09-19 00:40:22.297727 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:22.297738 | orchestrator | 2025-09-19 00:40:22.297748 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-19 00:40:22.297759 | orchestrator | Friday 19 September 2025 00:40:19 +0000 (0:00:00.157) 0:01:11.517 ****** 2025-09-19 00:40:22.297770 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9c5ae36c-b075-5e22-9b23-69e08de6e546', 'data_vg': 'ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546'})  2025-09-19 00:40:22.297781 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3271a5cd-b931-506b-9a72-a7bc6b6b65fd', 'data_vg': 'ceph-3271a5cd-b931-506b-9a72-a7bc6b6b65fd'})  2025-09-19 00:40:22.297791 | orchestrator | skipping: 
[testbed-node-5] 2025-09-19 00:40:22.297802 | orchestrator | 2025-09-19 00:40:22.297812 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-19 00:40:22.297838 | orchestrator | Friday 19 September 2025 00:40:19 +0000 (0:00:00.151) 0:01:11.669 ****** 2025-09-19 00:40:22.297850 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9c5ae36c-b075-5e22-9b23-69e08de6e546', 'data_vg': 'ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546'})  2025-09-19 00:40:22.297861 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3271a5cd-b931-506b-9a72-a7bc6b6b65fd', 'data_vg': 'ceph-3271a5cd-b931-506b-9a72-a7bc6b6b65fd'})  2025-09-19 00:40:22.297871 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:22.297882 | orchestrator | 2025-09-19 00:40:22.297893 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-19 00:40:22.297903 | orchestrator | Friday 19 September 2025 00:40:20 +0000 (0:00:00.358) 0:01:12.027 ****** 2025-09-19 00:40:22.297915 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9c5ae36c-b075-5e22-9b23-69e08de6e546', 'data_vg': 'ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546'})  2025-09-19 00:40:22.297926 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3271a5cd-b931-506b-9a72-a7bc6b6b65fd', 'data_vg': 'ceph-3271a5cd-b931-506b-9a72-a7bc6b6b65fd'})  2025-09-19 00:40:22.297937 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:22.297947 | orchestrator | 2025-09-19 00:40:22.297960 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-19 00:40:22.297972 | orchestrator | Friday 19 September 2025 00:40:20 +0000 (0:00:00.164) 0:01:12.192 ****** 2025-09-19 00:40:22.297984 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:40:22.297997 | orchestrator | 2025-09-19 00:40:22.298009 | orchestrator | TASK [Get list of Ceph PVs with 
associated VGs] ******************************** 2025-09-19 00:40:22.298079 | orchestrator | Friday 19 September 2025 00:40:20 +0000 (0:00:00.558) 0:01:12.750 ****** 2025-09-19 00:40:22.298092 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:40:22.298125 | orchestrator | 2025-09-19 00:40:22.298138 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-19 00:40:22.298151 | orchestrator | Friday 19 September 2025 00:40:21 +0000 (0:00:00.517) 0:01:13.267 ****** 2025-09-19 00:40:22.298162 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:40:22.298174 | orchestrator | 2025-09-19 00:40:22.298186 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-19 00:40:22.298198 | orchestrator | Friday 19 September 2025 00:40:21 +0000 (0:00:00.143) 0:01:13.411 ****** 2025-09-19 00:40:22.298211 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-3271a5cd-b931-506b-9a72-a7bc6b6b65fd', 'vg_name': 'ceph-3271a5cd-b931-506b-9a72-a7bc6b6b65fd'}) 2025-09-19 00:40:22.298225 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-9c5ae36c-b075-5e22-9b23-69e08de6e546', 'vg_name': 'ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546'}) 2025-09-19 00:40:22.298238 | orchestrator | 2025-09-19 00:40:22.298250 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-19 00:40:22.298270 | orchestrator | Friday 19 September 2025 00:40:21 +0000 (0:00:00.189) 0:01:13.600 ****** 2025-09-19 00:40:22.298301 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9c5ae36c-b075-5e22-9b23-69e08de6e546', 'data_vg': 'ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546'})  2025-09-19 00:40:22.298314 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3271a5cd-b931-506b-9a72-a7bc6b6b65fd', 'data_vg': 'ceph-3271a5cd-b931-506b-9a72-a7bc6b6b65fd'})  2025-09-19 00:40:22.298325 | orchestrator | skipping: 
[testbed-node-5] 2025-09-19 00:40:22.298336 | orchestrator | 2025-09-19 00:40:22.298347 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-19 00:40:22.298358 | orchestrator | Friday 19 September 2025 00:40:21 +0000 (0:00:00.151) 0:01:13.752 ****** 2025-09-19 00:40:22.298368 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9c5ae36c-b075-5e22-9b23-69e08de6e546', 'data_vg': 'ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546'})  2025-09-19 00:40:22.298379 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3271a5cd-b931-506b-9a72-a7bc6b6b65fd', 'data_vg': 'ceph-3271a5cd-b931-506b-9a72-a7bc6b6b65fd'})  2025-09-19 00:40:22.298390 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:22.298402 | orchestrator | 2025-09-19 00:40:22.298413 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-19 00:40:22.298423 | orchestrator | Friday 19 September 2025 00:40:21 +0000 (0:00:00.164) 0:01:13.917 ****** 2025-09-19 00:40:22.298434 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9c5ae36c-b075-5e22-9b23-69e08de6e546', 'data_vg': 'ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546'})  2025-09-19 00:40:22.298445 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3271a5cd-b931-506b-9a72-a7bc6b6b65fd', 'data_vg': 'ceph-3271a5cd-b931-506b-9a72-a7bc6b6b65fd'})  2025-09-19 00:40:22.298456 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:22.298467 | orchestrator | 2025-09-19 00:40:22.298478 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-19 00:40:22.298488 | orchestrator | Friday 19 September 2025 00:40:22 +0000 (0:00:00.154) 0:01:14.071 ****** 2025-09-19 00:40:22.298499 | orchestrator | ok: [testbed-node-5] => { 2025-09-19 00:40:22.298510 | orchestrator |  "lvm_report": { 2025-09-19 00:40:22.298522 | orchestrator |  "lv": [ 2025-09-19 
00:40:22.298533 | orchestrator |  { 2025-09-19 00:40:22.298544 | orchestrator |  "lv_name": "osd-block-3271a5cd-b931-506b-9a72-a7bc6b6b65fd", 2025-09-19 00:40:22.298556 | orchestrator |  "vg_name": "ceph-3271a5cd-b931-506b-9a72-a7bc6b6b65fd" 2025-09-19 00:40:22.298566 | orchestrator |  }, 2025-09-19 00:40:22.298582 | orchestrator |  { 2025-09-19 00:40:22.298593 | orchestrator |  "lv_name": "osd-block-9c5ae36c-b075-5e22-9b23-69e08de6e546", 2025-09-19 00:40:22.298604 | orchestrator |  "vg_name": "ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546" 2025-09-19 00:40:22.298615 | orchestrator |  } 2025-09-19 00:40:22.298625 | orchestrator |  ], 2025-09-19 00:40:22.298636 | orchestrator |  "pv": [ 2025-09-19 00:40:22.298647 | orchestrator |  { 2025-09-19 00:40:22.298657 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-19 00:40:22.298668 | orchestrator |  "vg_name": "ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546" 2025-09-19 00:40:22.298679 | orchestrator |  }, 2025-09-19 00:40:22.298689 | orchestrator |  { 2025-09-19 00:40:22.298700 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-19 00:40:22.298711 | orchestrator |  "vg_name": "ceph-3271a5cd-b931-506b-9a72-a7bc6b6b65fd" 2025-09-19 00:40:22.298722 | orchestrator |  } 2025-09-19 00:40:22.298732 | orchestrator |  ] 2025-09-19 00:40:22.298743 | orchestrator |  } 2025-09-19 00:40:22.298754 | orchestrator | } 2025-09-19 00:40:22.298765 | orchestrator | 2025-09-19 00:40:22.298776 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:40:22.298787 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-19 00:40:22.298806 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-19 00:40:22.298817 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-19 00:40:22.298828 | orchestrator | 2025-09-19 00:40:22.298838 | 
orchestrator | 2025-09-19 00:40:22.298849 | orchestrator | 2025-09-19 00:40:22.298860 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 00:40:22.298870 | orchestrator | Friday 19 September 2025 00:40:22 +0000 (0:00:00.146) 0:01:14.217 ****** 2025-09-19 00:40:22.298881 | orchestrator | =============================================================================== 2025-09-19 00:40:22.298892 | orchestrator | Create block VGs -------------------------------------------------------- 5.82s 2025-09-19 00:40:22.298902 | orchestrator | Create block LVs -------------------------------------------------------- 4.33s 2025-09-19 00:40:22.298913 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 2.10s 2025-09-19 00:40:22.298924 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.63s 2025-09-19 00:40:22.298934 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.63s 2025-09-19 00:40:22.298945 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.60s 2025-09-19 00:40:22.298956 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.59s 2025-09-19 00:40:22.298966 | orchestrator | Add known partitions to the list of available block devices ------------- 1.47s 2025-09-19 00:40:22.298984 | orchestrator | Add known links to the list of available block devices ------------------ 1.13s 2025-09-19 00:40:22.649754 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 1.07s 2025-09-19 00:40:22.649853 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s 2025-09-19 00:40:22.649867 | orchestrator | Print LVM report data --------------------------------------------------- 0.86s 2025-09-19 00:40:22.649878 | orchestrator | Add known partitions to the list of 
available block devices ------------- 0.84s 2025-09-19 00:40:22.649889 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.71s 2025-09-19 00:40:22.649900 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.69s 2025-09-19 00:40:22.649910 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.68s 2025-09-19 00:40:22.649921 | orchestrator | Check whether ceph_db_wal_devices is used exclusively ------------------- 0.68s 2025-09-19 00:40:22.649931 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.67s 2025-09-19 00:40:22.649942 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.66s 2025-09-19 00:40:22.649952 | orchestrator | Print size needed for WAL LVs on ceph_db_wal_devices -------------------- 0.66s 2025-09-19 00:40:34.794781 | orchestrator | 2025-09-19 00:40:34 | INFO  | Task d12f38ae-d977-4c3e-b6f1-1e760e305ccc (facts) was prepared for execution. 2025-09-19 00:40:34.794886 | orchestrator | 2025-09-19 00:40:34 | INFO  | It takes a moment until task d12f38ae-d977-4c3e-b6f1-1e760e305ccc (facts) has been started and output is visible here. 
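The play above gathers DB/WAL volume groups "with total and available size in bytes" and prints a `vgs_report` whose `vg` list is empty on this testbed (no separate DB/WAL devices are configured). As a minimal sketch of how such a report could be consumed downstream, the snippet below parses LVM's JSON reporting format (`vgs --reportformat json --units b`); the sample JSON and the `ceph-db-0` VG name are hypothetical, not taken from this run.

```python
import json

# Hypothetical sample of `vgs --reportformat json --units b` output.
# On this testbed the play printed an empty report ({"vg": []}).
VGS_JSON = """
{
  "report": [
    {
      "vg": [
        {"vg_name": "ceph-db-0",
         "vg_size": "32212254720B",
         "vg_free": "10737418240B"}
      ]
    }
  ]
}
"""

def vg_sizes(raw: str) -> dict:
    """Map each VG name to (total_bytes, free_bytes), stripping the 'B' suffix."""
    vgs = json.loads(raw)["report"][0]["vg"]
    return {
        vg["vg_name"]: (int(vg["vg_size"].rstrip("B")),
                        int(vg["vg_free"].rstrip("B")))
        for vg in vgs
    }

print(vg_sizes(VGS_JSON))
```

A size check like the play's "Fail if size of DB LVs ... > available" task then reduces to comparing the wanted LV bytes against the second tuple element.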
2025-09-19 00:40:46.497300 | orchestrator | 2025-09-19 00:40:46.497401 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-19 00:40:46.497417 | orchestrator | 2025-09-19 00:40:46.497429 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-19 00:40:46.497441 | orchestrator | Friday 19 September 2025 00:40:38 +0000 (0:00:00.265) 0:00:00.265 ****** 2025-09-19 00:40:46.497453 | orchestrator | ok: [testbed-manager] 2025-09-19 00:40:46.497465 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:40:46.497476 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:40:46.497514 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:40:46.497526 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:40:46.497536 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:40:46.497546 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:40:46.497557 | orchestrator | 2025-09-19 00:40:46.497568 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-19 00:40:46.497579 | orchestrator | Friday 19 September 2025 00:40:39 +0000 (0:00:00.927) 0:00:01.192 ****** 2025-09-19 00:40:46.497590 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:40:46.497601 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:40:46.497611 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:40:46.497623 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:40:46.497634 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:40:46.497645 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:40:46.497656 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:46.497666 | orchestrator | 2025-09-19 00:40:46.497677 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-19 00:40:46.497687 | orchestrator | 2025-09-19 00:40:46.497698 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-09-19 00:40:46.497709 | orchestrator | Friday 19 September 2025 00:40:40 +0000 (0:00:01.070) 0:00:02.263 ****** 2025-09-19 00:40:46.497720 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:40:46.497730 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:40:46.497741 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:40:46.497751 | orchestrator | ok: [testbed-manager] 2025-09-19 00:40:46.497762 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:40:46.497772 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:40:46.497783 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:40:46.497793 | orchestrator | 2025-09-19 00:40:46.497804 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-19 00:40:46.497814 | orchestrator | 2025-09-19 00:40:46.497825 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-19 00:40:46.497836 | orchestrator | Friday 19 September 2025 00:40:45 +0000 (0:00:04.814) 0:00:07.078 ****** 2025-09-19 00:40:46.497846 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:40:46.497858 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:40:46.497870 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:40:46.497882 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:40:46.497894 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:40:46.497906 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:40:46.497918 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:40:46.497930 | orchestrator | 2025-09-19 00:40:46.497942 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:40:46.497955 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 00:40:46.497968 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2025-09-19 00:40:46.497980 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 00:40:46.497993 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 00:40:46.498006 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 00:40:46.498067 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 00:40:46.498101 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 00:40:46.498112 | orchestrator | 2025-09-19 00:40:46.498123 | orchestrator | 2025-09-19 00:40:46.498142 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 00:40:46.498153 | orchestrator | Friday 19 September 2025 00:40:46 +0000 (0:00:00.534) 0:00:07.612 ****** 2025-09-19 00:40:46.498164 | orchestrator | =============================================================================== 2025-09-19 00:40:46.498174 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.81s 2025-09-19 00:40:46.498185 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.07s 2025-09-19 00:40:46.498196 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.93s 2025-09-19 00:40:46.498207 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2025-09-19 00:40:58.727681 | orchestrator | 2025-09-19 00:40:58 | INFO  | Task e0b02b57-ee6f-45e2-8824-d9eda32b0414 (frr) was prepared for execution. 2025-09-19 00:40:58.727790 | orchestrator | 2025-09-19 00:40:58 | INFO  | It takes a moment until task e0b02b57-ee6f-45e2-8824-d9eda32b0414 (frr) has been started and output is visible here. 
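The PLAY RECAP lines above follow Ansible's fixed `host : key=value ...` layout, which makes them easy to check programmatically, e.g. from a log-scraping script that fails a build on any non-zero `failed`/`unreachable` counter. A small sketch, assuming only that recap format (the parser below is not part of the job itself):

```python
import re

# A recap line as emitted above (column spacing varies between runs).
RECAP_LINE = ("testbed-node-5 : ok=2  changed=0 unreachable=0 "
              "failed=0 skipped=2  rescued=0 ignored=0")

RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counts>.*)$")

def parse_recap(line: str):
    """Split a PLAY RECAP line into (hostname, {counter: value})."""
    m = RECAP_RE.match(line)
    counts = {key: int(val)
              for key, val in (pair.split("=")
                               for pair in m.group("counts").split())}
    return m.group("host"), counts

host, counts = parse_recap(RECAP_LINE)
# A healthy run has failed == unreachable == 0 on every host.
print(host, counts["failed"], counts["unreachable"])
```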
2025-09-19 00:41:25.176553 | orchestrator |
2025-09-19 00:41:25.176664 | orchestrator | PLAY [Apply role frr] **********************************************************
2025-09-19 00:41:25.176682 | orchestrator |
2025-09-19 00:41:25.176694 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2025-09-19 00:41:25.176724 | orchestrator | Friday 19 September 2025 00:41:02 +0000 (0:00:00.238) 0:00:00.238 ******
2025-09-19 00:41:25.176738 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2025-09-19 00:41:25.176750 | orchestrator |
2025-09-19 00:41:25.176761 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2025-09-19 00:41:25.176772 | orchestrator | Friday 19 September 2025 00:41:02 +0000 (0:00:00.250) 0:00:00.489 ******
2025-09-19 00:41:25.176784 | orchestrator | changed: [testbed-manager]
2025-09-19 00:41:25.176795 | orchestrator |
2025-09-19 00:41:25.176806 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2025-09-19 00:41:25.176818 | orchestrator | Friday 19 September 2025 00:41:04 +0000 (0:00:01.145) 0:00:01.634 ******
2025-09-19 00:41:25.176828 | orchestrator | changed: [testbed-manager]
2025-09-19 00:41:25.176839 | orchestrator |
2025-09-19 00:41:25.176850 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2025-09-19 00:41:25.176868 | orchestrator | Friday 19 September 2025 00:41:13 +0000 (0:00:09.528) 0:00:11.163 ******
2025-09-19 00:41:25.176879 | orchestrator | ok: [testbed-manager]
2025-09-19 00:41:25.176891 | orchestrator |
2025-09-19 00:41:25.176901 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2025-09-19 00:41:25.176912 | orchestrator | Friday 19 September 2025 00:41:14 +0000 (0:00:01.302) 0:00:12.465 ******
2025-09-19 00:41:25.176923 | orchestrator | changed: [testbed-manager]
2025-09-19 00:41:25.176934 | orchestrator |
2025-09-19 00:41:25.176944 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2025-09-19 00:41:25.176955 | orchestrator | Friday 19 September 2025 00:41:15 +0000 (0:00:00.939) 0:00:13.405 ******
2025-09-19 00:41:25.176966 | orchestrator | ok: [testbed-manager]
2025-09-19 00:41:25.176977 | orchestrator |
2025-09-19 00:41:25.176987 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2025-09-19 00:41:25.176999 | orchestrator | Friday 19 September 2025 00:41:17 +0000 (0:00:01.168) 0:00:14.573 ******
2025-09-19 00:41:25.177010 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 00:41:25.177020 | orchestrator |
2025-09-19 00:41:25.177031 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] ***
2025-09-19 00:41:25.177042 | orchestrator | Friday 19 September 2025 00:41:17 +0000 (0:00:00.794) 0:00:15.368 ******
2025-09-19 00:41:25.177093 | orchestrator | skipping: [testbed-manager]
2025-09-19 00:41:25.177107 | orchestrator |
2025-09-19 00:41:25.177120 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] *********
2025-09-19 00:41:25.177133 | orchestrator | Friday 19 September 2025 00:41:18 +0000 (0:00:00.159) 0:00:15.528 ******
2025-09-19 00:41:25.177170 | orchestrator | changed: [testbed-manager]
2025-09-19 00:41:25.177183 | orchestrator |
2025-09-19 00:41:25.177196 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2025-09-19 00:41:25.177208 | orchestrator | Friday 19 September 2025 00:41:19 +0000 (0:00:00.982) 0:00:16.511 ******
2025-09-19 00:41:25.177220 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2025-09-19 00:41:25.177234 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2025-09-19 00:41:25.177247 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2025-09-19 00:41:25.177260 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2025-09-19 00:41:25.177272 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2025-09-19 00:41:25.177285 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2025-09-19 00:41:25.177297 | orchestrator |
2025-09-19 00:41:25.177308 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2025-09-19 00:41:25.177321 | orchestrator | Friday 19 September 2025 00:41:22 +0000 (0:00:03.159) 0:00:19.670 ******
2025-09-19 00:41:25.177334 | orchestrator | ok: [testbed-manager]
2025-09-19 00:41:25.177346 | orchestrator |
2025-09-19 00:41:25.177358 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2025-09-19 00:41:25.177371 | orchestrator | Friday 19 September 2025 00:41:23 +0000 (0:00:01.344) 0:00:21.014 ******
2025-09-19 00:41:25.177382 | orchestrator | changed: [testbed-manager]
2025-09-19 00:41:25.177394 | orchestrator |
2025-09-19 00:41:25.177406 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 00:41:25.177419 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 00:41:25.177432 | orchestrator |
2025-09-19 00:41:25.177444 | orchestrator |
2025-09-19 00:41:25.177455 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 00:41:25.177466 | orchestrator | Friday 19 September 2025 00:41:24 +0000 (0:00:01.427) 0:00:22.442 ******
2025-09-19 00:41:25.177477 | orchestrator | ===============================================================================
2025-09-19 00:41:25.177488 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.53s
2025-09-19 00:41:25.177498 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.16s
2025-09-19 00:41:25.177509 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.43s
2025-09-19 00:41:25.177520 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.34s
2025-09-19 00:41:25.177547 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.30s
2025-09-19 00:41:25.177559 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.17s
2025-09-19 00:41:25.177569 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.15s
2025-09-19 00:41:25.177580 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.98s
2025-09-19 00:41:25.177591 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.94s
2025-09-19 00:41:25.177602 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.79s
2025-09-19 00:41:25.177613 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.25s
2025-09-19 00:41:25.177623 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.16s
2025-09-19 00:41:25.454514 | orchestrator |
2025-09-19 00:41:25.458347 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Fri Sep 19 00:41:25 UTC 2025
2025-09-19 00:41:25.458417 | orchestrator |
2025-09-19 00:41:27.253288 | orchestrator | 2025-09-19 00:41:27 | INFO  | Collection nutshell is prepared for execution
2025-09-19 00:41:27.253415 | orchestrator | 2025-09-19
00:41:27 | INFO  | D [0] - dotfiles
2025-09-19 00:41:37.275475 | orchestrator | 2025-09-19 00:41:37 | INFO  | D [0] - homer
2025-09-19 00:41:37.275605 | orchestrator | 2025-09-19 00:41:37 | INFO  | D [0] - netdata
2025-09-19 00:41:37.275633 | orchestrator | 2025-09-19 00:41:37 | INFO  | D [0] - openstackclient
2025-09-19 00:41:37.275652 | orchestrator | 2025-09-19 00:41:37 | INFO  | D [0] - phpmyadmin
2025-09-19 00:41:37.275670 | orchestrator | 2025-09-19 00:41:37 | INFO  | A [0] - common
2025-09-19 00:41:37.278978 | orchestrator | 2025-09-19 00:41:37 | INFO  | A [1] -- loadbalancer
2025-09-19 00:41:37.279034 | orchestrator | 2025-09-19 00:41:37 | INFO  | D [2] --- opensearch
2025-09-19 00:41:37.279955 | orchestrator | 2025-09-19 00:41:37 | INFO  | A [2] --- mariadb-ng
2025-09-19 00:41:37.279980 | orchestrator | 2025-09-19 00:41:37 | INFO  | D [3] ---- horizon
2025-09-19 00:41:37.279991 | orchestrator | 2025-09-19 00:41:37 | INFO  | A [3] ---- keystone
2025-09-19 00:41:37.280001 | orchestrator | 2025-09-19 00:41:37 | INFO  | A [4] ----- neutron
2025-09-19 00:41:37.280011 | orchestrator | 2025-09-19 00:41:37 | INFO  | D [5] ------ wait-for-nova
2025-09-19 00:41:37.280237 | orchestrator | 2025-09-19 00:41:37 | INFO  | A [5] ------ octavia
2025-09-19 00:41:37.281517 | orchestrator | 2025-09-19 00:41:37 | INFO  | D [4] ----- barbican
2025-09-19 00:41:37.281678 | orchestrator | 2025-09-19 00:41:37 | INFO  | D [4] ----- designate
2025-09-19 00:41:37.281696 | orchestrator | 2025-09-19 00:41:37 | INFO  | D [4] ----- ironic
2025-09-19 00:41:37.281955 | orchestrator | 2025-09-19 00:41:37 | INFO  | D [4] ----- placement
2025-09-19 00:41:37.282203 | orchestrator | 2025-09-19 00:41:37 | INFO  | D [4] ----- magnum
2025-09-19 00:41:37.283802 | orchestrator | 2025-09-19 00:41:37 | INFO  | A [1] -- openvswitch
2025-09-19 00:41:37.283831 | orchestrator | 2025-09-19 00:41:37 | INFO  | D [2] --- ovn
2025-09-19 00:41:37.283842 | orchestrator | 2025-09-19 00:41:37 | INFO  | D [1] -- memcached
2025-09-19 00:41:37.283853 | orchestrator | 2025-09-19 00:41:37 | INFO  | D [1] -- redis
2025-09-19 00:41:37.283864 | orchestrator | 2025-09-19 00:41:37 | INFO  | D [1] -- rabbitmq-ng
2025-09-19 00:41:37.283875 | orchestrator | 2025-09-19 00:41:37 | INFO  | A [0] - kubernetes
2025-09-19 00:41:37.287944 | orchestrator | 2025-09-19 00:41:37 | INFO  | D [1] -- kubeconfig
2025-09-19 00:41:37.287968 | orchestrator | 2025-09-19 00:41:37 | INFO  | A [1] -- copy-kubeconfig
2025-09-19 00:41:37.287980 | orchestrator | 2025-09-19 00:41:37 | INFO  | A [0] - ceph
2025-09-19 00:41:37.291476 | orchestrator | 2025-09-19 00:41:37 | INFO  | A [1] -- ceph-pools
2025-09-19 00:41:37.291646 | orchestrator | 2025-09-19 00:41:37 | INFO  | A [2] --- copy-ceph-keys
2025-09-19 00:41:37.291665 | orchestrator | 2025-09-19 00:41:37 | INFO  | A [3] ---- cephclient
2025-09-19 00:41:37.291676 | orchestrator | 2025-09-19 00:41:37 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-09-19 00:41:37.291687 | orchestrator | 2025-09-19 00:41:37 | INFO  | A [4] ----- wait-for-keystone
2025-09-19 00:41:37.291708 | orchestrator | 2025-09-19 00:41:37 | INFO  | D [5] ------ kolla-ceph-rgw
2025-09-19 00:41:37.291720 | orchestrator | 2025-09-19 00:41:37 | INFO  | D [5] ------ glance
2025-09-19 00:41:37.291731 | orchestrator | 2025-09-19 00:41:37 | INFO  | D [5] ------ cinder
2025-09-19 00:41:37.291742 | orchestrator | 2025-09-19 00:41:37 | INFO  | D [5] ------ nova
2025-09-19 00:41:37.291990 | orchestrator | 2025-09-19 00:41:37 | INFO  | A [4] ----- prometheus
2025-09-19 00:41:37.292077 | orchestrator | 2025-09-19 00:41:37 | INFO  | D [5] ------ grafana
2025-09-19 00:41:37.474318 | orchestrator | 2025-09-19 00:41:37 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-09-19 00:41:37.474413 | orchestrator | 2025-09-19 00:41:37 | INFO  | Tasks are running in the background
2025-09-19 00:41:40.408243 | orchestrator | 2025-09-19 00:41:40 | INFO  | No task IDs specified, wait for
all currently running tasks
2025-09-19 00:41:42.544754 | orchestrator | 2025-09-19 00:41:42 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED
2025-09-19 00:41:42.545491 | orchestrator | 2025-09-19 00:41:42 | INFO  | Task e20e6b92-bb32-4476-86a9-17fdca21af26 is in state STARTED
2025-09-19 00:41:42.545686 | orchestrator | 2025-09-19 00:41:42 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:41:42.546219 | orchestrator | 2025-09-19 00:41:42 | INFO  | Task cc35a74b-7c40-417a-b6ec-7188cfc87084 is in state STARTED
2025-09-19 00:41:42.546823 | orchestrator | 2025-09-19 00:41:42 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:41:42.550566 | orchestrator | 2025-09-19 00:41:42 | INFO  | Task a3597c09-3dde-47ab-80bf-ddc434f76762 is in state STARTED
2025-09-19 00:41:42.551026 | orchestrator | 2025-09-19 00:41:42 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:41:42.551155 | orchestrator | 2025-09-19 00:41:42 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:41:45.584981 | orchestrator | 2025-09-19 00:41:45 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED
2025-09-19 00:41:45.585153 | orchestrator | 2025-09-19 00:41:45 | INFO  | Task e20e6b92-bb32-4476-86a9-17fdca21af26 is in state STARTED
2025-09-19 00:41:45.585999 | orchestrator | 2025-09-19 00:41:45 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:41:45.588601 | orchestrator | 2025-09-19 00:41:45 | INFO  | Task cc35a74b-7c40-417a-b6ec-7188cfc87084 is in state STARTED
2025-09-19 00:41:45.588889 | orchestrator | 2025-09-19 00:41:45 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:41:45.589366 | orchestrator | 2025-09-19 00:41:45 | INFO  | Task a3597c09-3dde-47ab-80bf-ddc434f76762 is in state STARTED
2025-09-19 00:41:45.589798 | orchestrator | 2025-09-19 00:41:45 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:41:45.589897 | orchestrator | 2025-09-19 00:41:45 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:41:48.616311 | orchestrator | 2025-09-19 00:41:48 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED
2025-09-19 00:41:48.616372 | orchestrator | 2025-09-19 00:41:48 | INFO  | Task e20e6b92-bb32-4476-86a9-17fdca21af26 is in state STARTED
2025-09-19 00:41:48.616819 | orchestrator | 2025-09-19 00:41:48 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:41:48.617298 | orchestrator | 2025-09-19 00:41:48 | INFO  | Task cc35a74b-7c40-417a-b6ec-7188cfc87084 is in state STARTED
2025-09-19 00:41:48.617925 | orchestrator | 2025-09-19 00:41:48 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:41:48.618384 | orchestrator | 2025-09-19 00:41:48 | INFO  | Task a3597c09-3dde-47ab-80bf-ddc434f76762 is in state STARTED
2025-09-19 00:41:48.619107 | orchestrator | 2025-09-19 00:41:48 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:41:48.619124 | orchestrator | 2025-09-19 00:41:48 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:41:51.718425 | orchestrator | 2025-09-19 00:41:51 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED
2025-09-19 00:41:51.718507 | orchestrator | 2025-09-19 00:41:51 | INFO  | Task e20e6b92-bb32-4476-86a9-17fdca21af26 is in state STARTED
2025-09-19 00:41:51.718522 | orchestrator | 2025-09-19 00:41:51 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:41:51.719077 | orchestrator | 2025-09-19 00:41:51 | INFO  | Task cc35a74b-7c40-417a-b6ec-7188cfc87084 is in state STARTED
2025-09-19 00:41:51.719824 | orchestrator | 2025-09-19 00:41:51 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:41:51.764139 | orchestrator | 2025-09-19 00:41:51 | INFO  | Task a3597c09-3dde-47ab-80bf-ddc434f76762 is in state STARTED
2025-09-19 00:41:51.764192 | orchestrator | 2025-09-19 00:41:51 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:41:51.764201 | orchestrator | 2025-09-19 00:41:51 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:41:54.759312 | orchestrator | 2025-09-19 00:41:54 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED
2025-09-19 00:41:54.762875 | orchestrator | 2025-09-19 00:41:54 | INFO  | Task e20e6b92-bb32-4476-86a9-17fdca21af26 is in state STARTED
2025-09-19 00:41:54.762897 | orchestrator | 2025-09-19 00:41:54 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:41:54.763231 | orchestrator | 2025-09-19 00:41:54 | INFO  | Task cc35a74b-7c40-417a-b6ec-7188cfc87084 is in state STARTED
2025-09-19 00:41:54.764640 | orchestrator | 2025-09-19 00:41:54 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:41:54.766993 | orchestrator | 2025-09-19 00:41:54 | INFO  | Task a3597c09-3dde-47ab-80bf-ddc434f76762 is in state STARTED
2025-09-19 00:41:54.767688 | orchestrator | 2025-09-19 00:41:54 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:41:54.767701 | orchestrator | 2025-09-19 00:41:54 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:41:57.834345 | orchestrator | 2025-09-19 00:41:57 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED
2025-09-19 00:41:57.836791 | orchestrator | 2025-09-19 00:41:57 | INFO  | Task e20e6b92-bb32-4476-86a9-17fdca21af26 is in state STARTED
2025-09-19 00:41:57.840048 | orchestrator | 2025-09-19 00:41:57 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:41:57.845866 | orchestrator | 2025-09-19 00:41:57 | INFO  | Task cc35a74b-7c40-417a-b6ec-7188cfc87084 is in state STARTED
2025-09-19 00:41:57.850560 | orchestrator | 2025-09-19 00:41:57 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:41:57.852764 | orchestrator | 2025-09-19 00:41:57 | INFO  | Task a3597c09-3dde-47ab-80bf-ddc434f76762 is in state STARTED
2025-09-19 00:41:57.854114 | orchestrator | 2025-09-19 00:41:57 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:41:57.854142 | orchestrator | 2025-09-19 00:41:57 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:42:01.110774 | orchestrator | 2025-09-19 00:42:01 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED
2025-09-19 00:42:01.112285 | orchestrator | 2025-09-19 00:42:01 | INFO  | Task e20e6b92-bb32-4476-86a9-17fdca21af26 is in state STARTED
2025-09-19 00:42:01.116194 | orchestrator | 2025-09-19 00:42:01 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:42:01.118279 | orchestrator | 2025-09-19 00:42:01 | INFO  | Task cc35a74b-7c40-417a-b6ec-7188cfc87084 is in state STARTED
2025-09-19 00:42:01.123146 | orchestrator | 2025-09-19 00:42:01 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:42:01.128584 | orchestrator | 2025-09-19 00:42:01 | INFO  | Task a3597c09-3dde-47ab-80bf-ddc434f76762 is in state STARTED
2025-09-19 00:42:01.138874 | orchestrator | 2025-09-19 00:42:01 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:42:01.139276 | orchestrator | 2025-09-19 00:42:01 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:42:04.286782 | orchestrator | 2025-09-19 00:42:04 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED
2025-09-19 00:42:04.286961 | orchestrator | 2025-09-19 00:42:04 | INFO  | Task e20e6b92-bb32-4476-86a9-17fdca21af26 is in state SUCCESS
2025-09-19 00:42:04.286994 | orchestrator |
2025-09-19 00:42:04.287007 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-09-19 00:42:04.287043 | orchestrator |
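The repeated `Task … is in state STARTED` / `Wait 1 second(s) until the next check` messages in this log come from a wait loop that polls each queued task until it leaves the `STARTED` state. A minimal, hypothetical sketch of such a loop (this is not the actual osism client code; `get_state` and `wait_for_tasks` are illustrative names):

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=60.0, sleep=time.sleep):
    """Poll every task until none is in STARTED, printing one status line per
    task per cycle, similar to the wait loop seen in the log above."""
    deadline = time.monotonic() + timeout
    while True:
        # Fetch the current state of every task once per cycle.
        states = {tid: get_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        if all(s != "STARTED" for s in states.values()):
            return states  # all tasks finished (SUCCESS, FAILURE, ...)
        if time.monotonic() > deadline:
            raise TimeoutError("tasks still running after timeout")
        print(f"Wait {int(interval)} second(s) until the next check")
        sleep(interval)
```

The injectable `sleep` parameter is a design choice that keeps the loop testable without real delays.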
2025-09-19 00:42:04.287055 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-09-19 00:42:04.287066 | orchestrator | Friday 19 September 2025 00:41:50 +0000 (0:00:00.680) 0:00:00.680 ******
2025-09-19 00:42:04.287077 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:42:04.287089 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:42:04.287099 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:42:04.287110 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:42:04.287121 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:42:04.287131 | orchestrator | changed: [testbed-manager]
2025-09-19 00:42:04.287142 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:42:04.287152 | orchestrator |
2025-09-19 00:42:04.287163 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-09-19 00:42:04.287174 | orchestrator | Friday 19 September 2025 00:41:53 +0000 (0:00:03.144) 0:00:03.825 ******
2025-09-19 00:42:04.287185 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-09-19 00:42:04.287196 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-09-19 00:42:04.287207 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-09-19 00:42:04.287217 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-09-19 00:42:04.287228 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-09-19 00:42:04.287239 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-09-19 00:42:04.287249 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-09-19 00:42:04.287260 | orchestrator |
2025-09-19 00:42:04.287271 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.]
*** 2025-09-19 00:42:04.287281 | orchestrator | Friday 19 September 2025 00:41:55 +0000 (0:00:02.551) 0:00:06.376 ****** 2025-09-19 00:42:04.287297 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 00:41:54.686691', 'end': '2025-09-19 00:41:54.694308', 'delta': '0:00:00.007617', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-19 00:42:04.287320 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 00:41:54.883696', 'end': '2025-09-19 00:41:54.893870', 'delta': '0:00:00.010174', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-19 00:42:04.287350 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 00:41:54.868493', 'end': '2025-09-19 00:41:54.872247', 'delta': '0:00:00.003754', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-19 00:42:04.287381 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 00:41:55.076538', 'end': '2025-09-19 00:41:55.085100', 'delta': '0:00:00.008562', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-19 00:42:04.287627 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 00:41:55.087265', 'end': '2025-09-19 00:41:55.096007', 'delta': '0:00:00.008742', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-19 00:42:04.287644 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 00:41:55.131351', 'end': '2025-09-19 00:41:55.140262', 'delta': '0:00:00.008911', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-19 00:42:04.287657 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 00:41:55.506619', 'end': '2025-09-19 00:41:55.516947', 'delta': '0:00:00.010328', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-19 00:42:04.287682 | orchestrator | 2025-09-19 00:42:04.287696 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-09-19 00:42:04.287708 | orchestrator | Friday 19 September 2025 00:41:57 +0000 (0:00:02.203) 0:00:08.580 ****** 2025-09-19 00:42:04.287721 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-19 00:42:04.287733 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-19 00:42:04.287746 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-19 00:42:04.287758 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-19 00:42:04.287771 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-19 00:42:04.287783 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-19 00:42:04.287795 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-19 00:42:04.287808 | orchestrator | 2025-09-19 00:42:04.287821 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
******
2025-09-19 00:42:04.287835 | orchestrator | Friday 19 September 2025 00:42:00 +0000 (0:00:02.509) 0:00:11.089 ******
2025-09-19 00:42:04.287852 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-09-19 00:42:04.287864 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-09-19 00:42:04.287877 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-09-19 00:42:04.287889 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-09-19 00:42:04.287902 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-09-19 00:42:04.287915 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-09-19 00:42:04.287926 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-09-19 00:42:04.287936 | orchestrator |
2025-09-19 00:42:04.287947 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 00:42:04.287965 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:42:04.287978 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:42:04.287989 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:42:04.288000 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:42:04.288010 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:42:04.288041 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:42:04.288053 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:42:04.288064 | orchestrator |
2025-09-19 00:42:04.288074 | orchestrator |
2025-09-19 00:42:04.288085 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 00:42:04.288096 | orchestrator | Friday 19 September 2025 00:42:03 +0000 (0:00:02.593) 0:00:13.682 ******
2025-09-19 00:42:04.288106 | orchestrator | ===============================================================================
2025-09-19 00:42:04.288124 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.15s
2025-09-19 00:42:04.288135 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.59s
2025-09-19 00:42:04.288145 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.55s
2025-09-19 00:42:04.288156 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.51s
2025-09-19 00:42:04.288167 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.20s
2025-09-19 00:42:04.288178 | orchestrator | 2025-09-19 00:42:04 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:42:04.288189 | orchestrator | 2025-09-19 00:42:04 | INFO  | Task cc35a74b-7c40-417a-b6ec-7188cfc87084 is in state STARTED
2025-09-19 00:42:04.289786 | orchestrator | 2025-09-19 00:42:04 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:42:04.290281 | orchestrator | 2025-09-19 00:42:04 | INFO  | Task a3597c09-3dde-47ab-80bf-ddc434f76762 is in state STARTED
2025-09-19 00:42:04.294914 | orchestrator | 2025-09-19 00:42:04 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:42:04.294954 | orchestrator | 2025-09-19 00:42:04 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:42:07.365633 | orchestrator | 2025-09-19 00:42:07 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED
2025-09-19 00:42:07.366168 | orchestrator | 2025-09-19 00:42:07 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:42:07.368886 | orchestrator | 2025-09-19 00:42:07 | INFO  | Task cc35a74b-7c40-417a-b6ec-7188cfc87084 is in state STARTED
2025-09-19 00:42:07.368923 | orchestrator | 2025-09-19 00:42:07 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:42:07.370173 | orchestrator | 2025-09-19 00:42:07 | INFO  | Task ad1f6a9a-cd68-448d-a4e7-c488bb156896 is in state STARTED
2025-09-19 00:42:07.371427 | orchestrator | 2025-09-19 00:42:07 | INFO  | Task a3597c09-3dde-47ab-80bf-ddc434f76762 is in state STARTED
2025-09-19 00:42:07.373668 | orchestrator | 2025-09-19 00:42:07 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:42:07.373691 | orchestrator | 2025-09-19 00:42:07 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:42:10.470321 | orchestrator | 2025-09-19 00:42:10 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED
2025-09-19 00:42:10.470382 | orchestrator | 2025-09-19 00:42:10 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:42:10.470402 | orchestrator | 2025-09-19 00:42:10 | INFO  | Task cc35a74b-7c40-417a-b6ec-7188cfc87084 is in state STARTED
2025-09-19 00:42:10.470410 | orchestrator | 2025-09-19 00:42:10 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:42:10.470417 | orchestrator | 2025-09-19 00:42:10 | INFO  | Task ad1f6a9a-cd68-448d-a4e7-c488bb156896 is in state STARTED
2025-09-19 00:42:10.470423 | orchestrator | 2025-09-19 00:42:10 | INFO  | Task a3597c09-3dde-47ab-80bf-ddc434f76762 is in state STARTED
2025-09-19 00:42:10.470430 | orchestrator | 2025-09-19 00:42:10 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:42:10.470437 | orchestrator | 2025-09-19 00:42:10 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:42:13.494309 | orchestrator | 2025-09-19 00:42:13 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED
2025-09-19 00:42:13.495375 | orchestrator | 2025-09-19 00:42:13 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:42:13.496754 | orchestrator | 2025-09-19 00:42:13 | INFO  | Task cc35a74b-7c40-417a-b6ec-7188cfc87084 is in state STARTED
2025-09-19 00:42:13.497618 | orchestrator | 2025-09-19 00:42:13 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:42:13.498667 | orchestrator | 2025-09-19 00:42:13 | INFO  | Task ad1f6a9a-cd68-448d-a4e7-c488bb156896 is in state STARTED
2025-09-19 00:42:13.501039 | orchestrator | 2025-09-19 00:42:13 | INFO  | Task a3597c09-3dde-47ab-80bf-ddc434f76762 is in state STARTED
2025-09-19 00:42:13.501818 | orchestrator | 2025-09-19 00:42:13 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:42:13.502124 | orchestrator | 2025-09-19 00:42:13 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:42:16.693256 | orchestrator | 2025-09-19 00:42:16 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED
2025-09-19 00:42:16.695566 | orchestrator | 2025-09-19 00:42:16 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:42:16.699499 | orchestrator | 2025-09-19 00:42:16 | INFO  | Task cc35a74b-7c40-417a-b6ec-7188cfc87084 is in state STARTED
2025-09-19 00:42:16.701879 | orchestrator | 2025-09-19 00:42:16 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:42:16.703534 | orchestrator | 2025-09-19 00:42:16 | INFO  | Task ad1f6a9a-cd68-448d-a4e7-c488bb156896 is in state STARTED
2025-09-19 00:42:16.705112 | orchestrator | 2025-09-19 00:42:16 | INFO  | Task a3597c09-3dde-47ab-80bf-ddc434f76762 is in state STARTED
2025-09-19 00:42:16.706753 | orchestrator | 2025-09-19 00:42:16 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:42:16.706822 | orchestrator | 2025-09-19 00:42:16 | INFO  | Wait 1 second(s) until the next
check 2025-09-19 00:42:19.806684 | orchestrator | 2025-09-19 00:42:19 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED 2025-09-19 00:42:19.810291 | orchestrator | 2025-09-19 00:42:19 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED 2025-09-19 00:42:19.811550 | orchestrator | 2025-09-19 00:42:19 | INFO  | Task cc35a74b-7c40-417a-b6ec-7188cfc87084 is in state STARTED 2025-09-19 00:42:19.815110 | orchestrator | 2025-09-19 00:42:19 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:42:19.816858 | orchestrator | 2025-09-19 00:42:19 | INFO  | Task ad1f6a9a-cd68-448d-a4e7-c488bb156896 is in state STARTED 2025-09-19 00:42:19.817298 | orchestrator | 2025-09-19 00:42:19 | INFO  | Task a3597c09-3dde-47ab-80bf-ddc434f76762 is in state STARTED 2025-09-19 00:42:19.820354 | orchestrator | 2025-09-19 00:42:19 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:42:19.820378 | orchestrator | 2025-09-19 00:42:19 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:42:22.868928 | orchestrator | 2025-09-19 00:42:22 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED 2025-09-19 00:42:22.869189 | orchestrator | 2025-09-19 00:42:22 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED 2025-09-19 00:42:22.870574 | orchestrator | 2025-09-19 00:42:22 | INFO  | Task cc35a74b-7c40-417a-b6ec-7188cfc87084 is in state STARTED 2025-09-19 00:42:22.871929 | orchestrator | 2025-09-19 00:42:22 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:42:22.872882 | orchestrator | 2025-09-19 00:42:22 | INFO  | Task ad1f6a9a-cd68-448d-a4e7-c488bb156896 is in state STARTED 2025-09-19 00:42:22.873745 | orchestrator | 2025-09-19 00:42:22 | INFO  | Task a3597c09-3dde-47ab-80bf-ddc434f76762 is in state STARTED 2025-09-19 00:42:22.874751 | orchestrator | 2025-09-19 00:42:22 | INFO  | Task 
707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:42:22.874809 | orchestrator | 2025-09-19 00:42:22 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:42:25.951389 | orchestrator | 2025-09-19 00:42:25 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED 2025-09-19 00:42:26.267212 | orchestrator | 2025-09-19 00:42:25 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED 2025-09-19 00:42:26.267263 | orchestrator | 2025-09-19 00:42:25 | INFO  | Task cc35a74b-7c40-417a-b6ec-7188cfc87084 is in state STARTED 2025-09-19 00:42:26.267271 | orchestrator | 2025-09-19 00:42:25 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:42:26.267276 | orchestrator | 2025-09-19 00:42:25 | INFO  | Task ad1f6a9a-cd68-448d-a4e7-c488bb156896 is in state STARTED 2025-09-19 00:42:26.267282 | orchestrator | 2025-09-19 00:42:25 | INFO  | Task a3597c09-3dde-47ab-80bf-ddc434f76762 is in state SUCCESS 2025-09-19 00:42:26.267287 | orchestrator | 2025-09-19 00:42:25 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:42:26.267293 | orchestrator | 2025-09-19 00:42:25 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:42:29.009719 | orchestrator | 2025-09-19 00:42:29 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED 2025-09-19 00:42:29.011508 | orchestrator | 2025-09-19 00:42:29 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED 2025-09-19 00:42:29.012968 | orchestrator | 2025-09-19 00:42:29 | INFO  | Task cc35a74b-7c40-417a-b6ec-7188cfc87084 is in state STARTED 2025-09-19 00:42:29.014692 | orchestrator | 2025-09-19 00:42:29 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:42:29.015872 | orchestrator | 2025-09-19 00:42:29 | INFO  | Task ad1f6a9a-cd68-448d-a4e7-c488bb156896 is in state STARTED 2025-09-19 00:42:29.017816 | orchestrator | 2025-09-19 00:42:29 | INFO  | Task 
707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:42:29.018744 | orchestrator | 2025-09-19 00:42:29 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:42:32.094158 | orchestrator | 2025-09-19 00:42:32 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED 2025-09-19 00:42:32.097439 | orchestrator | 2025-09-19 00:42:32 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED 2025-09-19 00:42:32.097496 | orchestrator | 2025-09-19 00:42:32 | INFO  | Task cc35a74b-7c40-417a-b6ec-7188cfc87084 is in state SUCCESS 2025-09-19 00:42:32.097508 | orchestrator | 2025-09-19 00:42:32 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:42:32.097519 | orchestrator | 2025-09-19 00:42:32 | INFO  | Task ad1f6a9a-cd68-448d-a4e7-c488bb156896 is in state STARTED 2025-09-19 00:42:32.097530 | orchestrator | 2025-09-19 00:42:32 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:42:32.097541 | orchestrator | 2025-09-19 00:42:32 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:42:35.128668 | orchestrator | 2025-09-19 00:42:35 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED 2025-09-19 00:42:35.129163 | orchestrator | 2025-09-19 00:42:35 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED 2025-09-19 00:42:35.129485 | orchestrator | 2025-09-19 00:42:35 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:42:35.130489 | orchestrator | 2025-09-19 00:42:35 | INFO  | Task ad1f6a9a-cd68-448d-a4e7-c488bb156896 is in state STARTED 2025-09-19 00:42:35.131332 | orchestrator | 2025-09-19 00:42:35 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:42:35.131373 | orchestrator | 2025-09-19 00:42:35 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:42:38.163318 | orchestrator | 2025-09-19 00:42:38 | INFO  | Task 
fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED 2025-09-19 00:42:38.164653 | orchestrator | 2025-09-19 00:42:38 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED 2025-09-19 00:42:38.165688 | orchestrator | 2025-09-19 00:42:38 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:42:38.166875 | orchestrator | 2025-09-19 00:42:38 | INFO  | Task ad1f6a9a-cd68-448d-a4e7-c488bb156896 is in state STARTED 2025-09-19 00:42:38.168581 | orchestrator | 2025-09-19 00:42:38 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:42:38.168943 | orchestrator | 2025-09-19 00:42:38 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:42:41.205373 | orchestrator | 2025-09-19 00:42:41 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED 2025-09-19 00:42:41.218653 | orchestrator | 2025-09-19 00:42:41 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED 2025-09-19 00:42:41.218747 | orchestrator | 2025-09-19 00:42:41 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:42:41.218771 | orchestrator | 2025-09-19 00:42:41 | INFO  | Task ad1f6a9a-cd68-448d-a4e7-c488bb156896 is in state STARTED 2025-09-19 00:42:41.230433 | orchestrator | 2025-09-19 00:42:41 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:42:41.230646 | orchestrator | 2025-09-19 00:42:41 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:42:44.306662 | orchestrator | 2025-09-19 00:42:44 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED 2025-09-19 00:42:44.306765 | orchestrator | 2025-09-19 00:42:44 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED 2025-09-19 00:42:44.306779 | orchestrator | 2025-09-19 00:42:44 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:42:44.306791 | orchestrator | 2025-09-19 00:42:44 | INFO  | Task 
ad1f6a9a-cd68-448d-a4e7-c488bb156896 is in state STARTED 2025-09-19 00:42:44.306803 | orchestrator | 2025-09-19 00:42:44 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:42:44.306814 | orchestrator | 2025-09-19 00:42:44 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:42:47.373200 | orchestrator | 2025-09-19 00:42:47 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED 2025-09-19 00:42:47.375815 | orchestrator | 2025-09-19 00:42:47 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED 2025-09-19 00:42:47.379445 | orchestrator | 2025-09-19 00:42:47 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:42:47.380918 | orchestrator | 2025-09-19 00:42:47 | INFO  | Task ad1f6a9a-cd68-448d-a4e7-c488bb156896 is in state STARTED 2025-09-19 00:42:47.383993 | orchestrator | 2025-09-19 00:42:47 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:42:47.384030 | orchestrator | 2025-09-19 00:42:47 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:42:50.464522 | orchestrator | 2025-09-19 00:42:50 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED 2025-09-19 00:42:50.464624 | orchestrator | 2025-09-19 00:42:50 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED 2025-09-19 00:42:50.464639 | orchestrator | 2025-09-19 00:42:50 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:42:50.464677 | orchestrator | 2025-09-19 00:42:50 | INFO  | Task ad1f6a9a-cd68-448d-a4e7-c488bb156896 is in state STARTED 2025-09-19 00:42:50.464689 | orchestrator | 2025-09-19 00:42:50 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:42:50.464701 | orchestrator | 2025-09-19 00:42:50 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:42:53.527894 | orchestrator | 2025-09-19 00:42:53 | INFO  | Task 
fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED
2025-09-19 00:42:53.528041 | orchestrator | 2025-09-19 00:42:53 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:42:53.528055 | orchestrator | 2025-09-19 00:42:53 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:42:53.528066 | orchestrator | 2025-09-19 00:42:53 | INFO  | Task ad1f6a9a-cd68-448d-a4e7-c488bb156896 is in state STARTED
2025-09-19 00:42:53.528076 | orchestrator | 2025-09-19 00:42:53 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:42:53.528086 | orchestrator | 2025-09-19 00:42:53 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:42:56.554256 | orchestrator | 2025-09-19 00:42:56 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED
2025-09-19 00:42:56.555254 | orchestrator | 2025-09-19 00:42:56 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:42:56.555306 | orchestrator | 2025-09-19 00:42:56 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:42:56.555709 | orchestrator | 2025-09-19 00:42:56 | INFO  | Task ad1f6a9a-cd68-448d-a4e7-c488bb156896 is in state SUCCESS
2025-09-19 00:42:56.556144 | orchestrator |
2025-09-19 00:42:56.556183 | orchestrator |
2025-09-19 00:42:56.556202 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-09-19 00:42:56.556221 | orchestrator |
2025-09-19 00:42:56.556239 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-09-19 00:42:56.556257 | orchestrator | Friday 19 September 2025 00:41:49 +0000 (0:00:00.384) 0:00:00.384 ******
2025-09-19 00:42:56.556277 | orchestrator | ok: [testbed-manager] => {
2025-09-19 00:42:56.556297 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-09-19 00:42:56.556318 | orchestrator | }
2025-09-19 00:42:56.556336 | orchestrator |
2025-09-19 00:42:56.556355 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-09-19 00:42:56.556374 | orchestrator | Friday 19 September 2025 00:41:49 +0000 (0:00:00.166) 0:00:00.550 ******
2025-09-19 00:42:56.556392 | orchestrator | ok: [testbed-manager]
2025-09-19 00:42:56.556411 | orchestrator |
2025-09-19 00:42:56.556429 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-09-19 00:42:56.556447 | orchestrator | Friday 19 September 2025 00:41:50 +0000 (0:00:01.167) 0:00:01.718 ******
2025-09-19 00:42:56.556592 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-09-19 00:42:56.556721 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-09-19 00:42:56.556749 | orchestrator |
2025-09-19 00:42:56.556767 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-09-19 00:42:56.556785 | orchestrator | Friday 19 September 2025 00:41:52 +0000 (0:00:02.861) 0:00:03.539 ******
2025-09-19 00:42:56.556802 | orchestrator | changed: [testbed-manager]
2025-09-19 00:42:56.556819 | orchestrator |
2025-09-19 00:42:56.556837 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-09-19 00:42:56.556856 | orchestrator | Friday 19 September 2025 00:41:55 +0000 (0:00:01.909) 0:00:06.401 ******
2025-09-19 00:42:56.556874 | orchestrator | changed: [testbed-manager]
2025-09-19 00:42:56.556893 | orchestrator |
2025-09-19 00:42:56.556910 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-09-19 00:42:56.556998 | orchestrator | Friday 19 September 2025 00:41:57 +0000 (0:00:01.909) 0:00:08.310 ******
2025-09-19 00:42:56.557022 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-09-19 00:42:56.557043 | orchestrator | ok: [testbed-manager]
2025-09-19 00:42:56.557063 | orchestrator |
2025-09-19 00:42:56.557084 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-09-19 00:42:56.557104 | orchestrator | Friday 19 September 2025 00:42:21 +0000 (0:00:24.792) 0:00:33.103 ******
2025-09-19 00:42:56.557124 | orchestrator | changed: [testbed-manager]
2025-09-19 00:42:56.557141 | orchestrator |
2025-09-19 00:42:56.557153 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 00:42:56.557164 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:42:56.557176 | orchestrator |
2025-09-19 00:42:56.557187 | orchestrator |
2025-09-19 00:42:56.557198 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 00:42:56.557208 | orchestrator | Friday 19 September 2025 00:42:23 +0000 (0:00:02.066) 0:00:35.170 ******
2025-09-19 00:42:56.557219 | orchestrator | ===============================================================================
2025-09-19 00:42:56.557230 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.79s
2025-09-19 00:42:56.557240 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.86s
2025-09-19 00:42:56.557251 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.07s
2025-09-19 00:42:56.557262 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.91s
2025-09-19 00:42:56.557272 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.82s
2025-09-19 00:42:56.557283 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.17s
2025-09-19 00:42:56.557293 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.17s
2025-09-19 00:42:56.557304 | orchestrator |
2025-09-19 00:42:56.557315 | orchestrator |
2025-09-19 00:42:56.557326 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-09-19 00:42:56.557336 | orchestrator |
2025-09-19 00:42:56.557349 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-09-19 00:42:56.557362 | orchestrator | Friday 19 September 2025 00:41:48 +0000 (0:00:00.644) 0:00:00.644 ******
2025-09-19 00:42:56.557375 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-09-19 00:42:56.557389 | orchestrator |
2025-09-19 00:42:56.557401 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-09-19 00:42:56.557413 | orchestrator | Friday 19 September 2025 00:41:49 +0000 (0:00:00.382) 0:00:01.027 ******
2025-09-19 00:42:56.557425 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-09-19 00:42:56.557437 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-09-19 00:42:56.557449 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-09-19 00:42:56.557462 | orchestrator |
2025-09-19 00:42:56.557473 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-09-19 00:42:56.557525 | orchestrator | Friday 19 September 2025 00:41:50 +0000 (0:00:01.745) 0:00:02.772 ******
2025-09-19 00:42:56.557537 | orchestrator | changed: [testbed-manager]
2025-09-19 00:42:56.557547 | orchestrator |
2025-09-19 00:42:56.557558 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-09-19 00:42:56.557569 |
orchestrator | Friday 19 September 2025 00:41:52 +0000 (0:00:02.007) 0:00:04.779 ******
2025-09-19 00:42:56.557600 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-09-19 00:42:56.557612 | orchestrator | ok: [testbed-manager]
2025-09-19 00:42:56.557633 | orchestrator |
2025-09-19 00:42:56.557644 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-09-19 00:42:56.557653 | orchestrator | Friday 19 September 2025 00:42:25 +0000 (0:00:32.532) 0:00:37.312 ******
2025-09-19 00:42:56.557662 | orchestrator | changed: [testbed-manager]
2025-09-19 00:42:56.557672 | orchestrator |
2025-09-19 00:42:56.557681 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-09-19 00:42:56.557691 | orchestrator | Friday 19 September 2025 00:42:27 +0000 (0:00:01.933) 0:00:39.246 ******
2025-09-19 00:42:56.557700 | orchestrator | ok: [testbed-manager]
2025-09-19 00:42:56.557709 | orchestrator |
2025-09-19 00:42:56.557719 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-09-19 00:42:56.557728 | orchestrator | Friday 19 September 2025 00:42:27 +0000 (0:00:00.586) 0:00:39.832 ******
2025-09-19 00:42:56.557738 | orchestrator | changed: [testbed-manager]
2025-09-19 00:42:56.557747 | orchestrator |
2025-09-19 00:42:56.557757 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-09-19 00:42:56.557766 | orchestrator | Friday 19 September 2025 00:42:29 +0000 (0:00:01.811) 0:00:41.644 ******
2025-09-19 00:42:56.557776 | orchestrator | changed: [testbed-manager]
2025-09-19 00:42:56.557785 | orchestrator |
2025-09-19 00:42:56.557794 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-09-19 00:42:56.557804 | orchestrator | Friday 19 September 2025 00:42:30 +0000 (0:00:00.755) 0:00:42.333 ******
2025-09-19 00:42:56.557813 | orchestrator | changed: [testbed-manager]
2025-09-19 00:42:56.557823 | orchestrator |
2025-09-19 00:42:56.557832 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-09-19 00:42:56.557842 | orchestrator | Friday 19 September 2025 00:42:31 +0000 (0:00:00.755) 0:00:43.089 ******
2025-09-19 00:42:56.557851 | orchestrator | ok: [testbed-manager]
2025-09-19 00:42:56.557861 | orchestrator |
2025-09-19 00:42:56.557870 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 00:42:56.557879 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:42:56.557889 | orchestrator |
2025-09-19 00:42:56.557898 | orchestrator |
2025-09-19 00:42:56.557908 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 00:42:56.557917 | orchestrator | Friday 19 September 2025 00:42:31 +0000 (0:00:00.345) 0:00:43.434 ******
2025-09-19 00:42:56.557953 | orchestrator | ===============================================================================
2025-09-19 00:42:56.557963 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 32.53s
2025-09-19 00:42:56.557973 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.01s
2025-09-19 00:42:56.557982 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.93s
2025-09-19 00:42:56.557991 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.81s
2025-09-19 00:42:56.558001 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.75s
2025-09-19 00:42:56.558010 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.76s
2025-09-19 00:42:56.558067 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.69s
2025-09-19 00:42:56.558078 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.59s
2025-09-19 00:42:56.558087 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.38s
2025-09-19 00:42:56.558097 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.35s
2025-09-19 00:42:56.558106 | orchestrator |
2025-09-19 00:42:56.558116 | orchestrator |
2025-09-19 00:42:56.558125 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-09-19 00:42:56.558134 | orchestrator |
2025-09-19 00:42:56.558144 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-09-19 00:42:56.558153 | orchestrator | Friday 19 September 2025 00:42:07 +0000 (0:00:00.182) 0:00:00.182 ******
2025-09-19 00:42:56.558200 | orchestrator | ok: [testbed-manager]
2025-09-19 00:42:56.558210 | orchestrator |
2025-09-19 00:42:56.558220 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-09-19 00:42:56.558229 | orchestrator | Friday 19 September 2025 00:42:07 +0000 (0:00:00.654) 0:00:00.836 ******
2025-09-19 00:42:56.558239 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-09-19 00:42:56.558248 | orchestrator |
2025-09-19 00:42:56.558258 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-09-19 00:42:56.558268 | orchestrator | Friday 19 September 2025 00:42:08 +0000 (0:00:00.466) 0:00:01.303 ******
2025-09-19 00:42:56.558277 | orchestrator | changed: [testbed-manager]
2025-09-19 00:42:56.558287 | orchestrator |
2025-09-19 00:42:56.558296 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-09-19 00:42:56.558305 | orchestrator | Friday 19 September 2025 00:42:09 +0000 (0:00:01.032) 0:00:02.335 ******
2025-09-19 00:42:56.558315 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-09-19 00:42:56.558324 | orchestrator | ok: [testbed-manager]
2025-09-19 00:42:56.558334 | orchestrator |
2025-09-19 00:42:56.558343 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-09-19 00:42:56.558353 | orchestrator | Friday 19 September 2025 00:42:48 +0000 (0:00:39.264) 0:00:41.600 ******
2025-09-19 00:42:56.558362 | orchestrator | changed: [testbed-manager]
2025-09-19 00:42:56.558372 | orchestrator |
2025-09-19 00:42:56.558382 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 00:42:56.558391 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:42:56.558401 | orchestrator |
2025-09-19 00:42:56.558410 | orchestrator |
2025-09-19 00:42:56.558425 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 00:42:56.558442 | orchestrator | Friday 19 September 2025 00:42:53 +0000 (0:00:04.606) 0:00:46.206 ******
2025-09-19 00:42:56.558452 | orchestrator | ===============================================================================
2025-09-19 00:42:56.558462 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 39.26s
2025-09-19 00:42:56.558471 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.61s
2025-09-19 00:42:56.558481 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.03s
2025-09-19 00:42:56.558490 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.65s
2025-09-19 00:42:56.558500 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.47s
2025-09-19 00:42:56.558509
| orchestrator | 2025-09-19 00:42:56 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:42:56.558519 | orchestrator | 2025-09-19 00:42:56 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:42:59.596595 | orchestrator | 2025-09-19 00:42:59 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED
2025-09-19 00:42:59.599679 | orchestrator | 2025-09-19 00:42:59 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:42:59.600517 | orchestrator | 2025-09-19 00:42:59 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:42:59.601580 | orchestrator | 2025-09-19 00:42:59 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:42:59.601604 | orchestrator | 2025-09-19 00:42:59 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:43:02.632363 | orchestrator | 2025-09-19 00:43:02 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED
2025-09-19 00:43:02.632844 | orchestrator | 2025-09-19 00:43:02 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:43:02.633713 | orchestrator | 2025-09-19 00:43:02 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:43:02.634635 | orchestrator | 2025-09-19 00:43:02 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:43:02.634670 | orchestrator | 2025-09-19 00:43:02 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:43:05.673626 | orchestrator | 2025-09-19 00:43:05 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state STARTED
2025-09-19 00:43:05.676612 | orchestrator | 2025-09-19 00:43:05 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:43:05.677206 | orchestrator | 2025-09-19 00:43:05 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:43:05.678451 | orchestrator | 2025-09-19 00:43:05 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:43:05.680264 | orchestrator | 2025-09-19 00:43:05 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:43:08.727585 | orchestrator |
2025-09-19 00:43:08.727695 | orchestrator |
2025-09-19 00:43:08.727712 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 00:43:08.727725 | orchestrator |
2025-09-19 00:43:08.727737 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 00:43:08.727749 | orchestrator | Friday 19 September 2025 00:41:50 +0000 (0:00:01.166) 0:00:01.166 ******
2025-09-19 00:43:08.727760 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-09-19 00:43:08.727771 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-09-19 00:43:08.727782 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-09-19 00:43:08.727792 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-09-19 00:43:08.727803 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-09-19 00:43:08.727814 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-09-19 00:43:08.727824 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-09-19 00:43:08.727835 | orchestrator |
2025-09-19 00:43:08.727845 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-09-19 00:43:08.727856 | orchestrator |
2025-09-19 00:43:08.727867 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-09-19 00:43:08.727878 | orchestrator | Friday 19 September 2025 00:41:51 +0000 (0:00:01.000) 0:00:02.167 ******
2025-09-19 00:43:08.727935 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 00:43:08.727951 | orchestrator |
2025-09-19 00:43:08.727962 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-09-19 00:43:08.727972 | orchestrator | Friday 19 September 2025 00:41:53 +0000 (0:00:01.295) 0:00:03.462 ******
2025-09-19 00:43:08.727983 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:43:08.727995 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:43:08.728006 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:43:08.728016 | orchestrator | ok: [testbed-manager]
2025-09-19 00:43:08.728027 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:43:08.728037 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:43:08.728059 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:43:08.728070 | orchestrator |
2025-09-19 00:43:08.728081 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-09-19 00:43:08.728092 | orchestrator | Friday 19 September 2025 00:41:55 +0000 (0:00:01.833) 0:00:05.295 ******
2025-09-19 00:43:08.728103 | orchestrator | ok: [testbed-manager]
2025-09-19 00:43:08.728113 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:43:08.728124 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:43:08.728134 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:43:08.728145 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:43:08.728178 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:43:08.728189 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:43:08.728199 | orchestrator |
2025-09-19 00:43:08.728210 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-09-19 00:43:08.728221 | orchestrator | Friday 19 September 2025 00:41:58 +0000 (0:00:03.196) 0:00:08.492 ******
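The repeated "Task … is in state STARTED" and "Wait 1 second(s) until the next check" lines in this console come from a client polling a set of task IDs until each one leaves the running state. A minimal sketch of that polling pattern (the `get_state` callback and the toy task IDs are hypothetical, not the actual OSISM API):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0):
    """Poll task states until every task has left the PENDING/STARTED states."""
    pending = set(task_ids)
    while pending:
        # sorted() iterates a snapshot, so discarding from `pending` is safe
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

# Toy driver: each fake task reports SUCCESS on its second poll.
calls = {}
def fake_state(task_id):
    calls[task_id] = calls.get(task_id, 0) + 1
    return "SUCCESS" if calls[task_id] >= 2 else "STARTED"

wait_for_tasks(["a", "b"], fake_state, interval=0.01)
```

One consequence of this design, visible in the log: tasks finish at different times, so the set of "STARTED" lines shrinks as each task reaches SUCCESS while polling continues for the rest.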
2025-09-19 00:43:08.728232 | orchestrator | changed: [testbed-manager]
2025-09-19 00:43:08.728243 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:43:08.728253 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:43:08.728264 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:43:08.728274 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:43:08.728285 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:43:08.728295 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:43:08.728306 | orchestrator |
2025-09-19 00:43:08.728317 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-09-19 00:43:08.728327 | orchestrator | Friday 19 September 2025 00:42:00 +0000 (0:00:01.845) 0:00:10.338 ******
2025-09-19 00:43:08.728338 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:43:08.728349 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:43:08.728360 | orchestrator | changed: [testbed-manager]
2025-09-19 00:43:08.728371 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:43:08.728381 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:43:08.728392 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:43:08.728402 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:43:08.728413 | orchestrator |
2025-09-19 00:43:08.728424 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-09-19 00:43:08.728434 | orchestrator | Friday 19 September 2025 00:42:12 +0000 (0:00:12.032) 0:00:22.371 ******
2025-09-19 00:43:08.728445 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:43:08.728456 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:43:08.728466 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:43:08.728477 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:43:08.728487 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:43:08.728497 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:43:08.728508 | orchestrator | changed: [testbed-manager]
2025-09-19 00:43:08.728519 | orchestrator |
2025-09-19 00:43:08.728529 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-09-19 00:43:08.728540 | orchestrator | Friday 19 September 2025 00:42:47 +0000 (0:00:34.915) 0:00:57.287 ******
2025-09-19 00:43:08.728551 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 00:43:08.728564 | orchestrator |
2025-09-19 00:43:08.728575 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-09-19 00:43:08.728586 | orchestrator | Friday 19 September 2025 00:42:48 +0000 (0:00:01.587) 0:00:58.874 ******
2025-09-19 00:43:08.728596 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-09-19 00:43:08.728607 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-09-19 00:43:08.728618 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-09-19 00:43:08.728629 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-09-19 00:43:08.728658 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-09-19 00:43:08.728669 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-09-19 00:43:08.728680 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-09-19 00:43:08.728690 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-09-19 00:43:08.728701 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-09-19 00:43:08.728711 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-09-19 00:43:08.728722 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-09-19 00:43:08.728733 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-09-19 00:43:08.728751 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-09-19 00:43:08.728762 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-09-19 00:43:08.728772 | orchestrator |
2025-09-19 00:43:08.728783 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-09-19 00:43:08.728796 | orchestrator | Friday 19 September 2025 00:42:54 +0000 (0:00:06.299) 0:01:05.174 ******
2025-09-19 00:43:08.728807 | orchestrator | ok: [testbed-manager]
2025-09-19 00:43:08.728817 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:43:08.728828 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:43:08.728839 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:43:08.728849 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:43:08.728860 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:43:08.728870 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:43:08.728881 | orchestrator |
2025-09-19 00:43:08.728891 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-09-19 00:43:08.728923 | orchestrator | Friday 19 September 2025 00:42:56 +0000 (0:00:01.322) 0:01:06.497 ******
2025-09-19 00:43:08.728935 | orchestrator | changed: [testbed-manager]
2025-09-19 00:43:08.728946 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:43:08.728956 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:43:08.728967 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:43:08.728977 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:43:08.728988 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:43:08.728998 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:43:08.729009 | orchestrator |
2025-09-19 00:43:08.729019 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-09-19 00:43:08.729030 | orchestrator | Friday 19 September 2025 00:42:57 +0000 (0:00:01.470) 0:01:07.967 ******
2025-09-19 00:43:08.729046 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:43:08.729183 | orchestrator | ok: [testbed-manager]
2025-09-19 00:43:08.729200 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:43:08.729211 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:43:08.729221 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:43:08.729232 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:43:08.729243 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:43:08.729254 | orchestrator |
2025-09-19 00:43:08.729265 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-09-19 00:43:08.729276 | orchestrator | Friday 19 September 2025 00:42:59 +0000 (0:00:01.294) 0:01:09.261 ******
2025-09-19 00:43:08.729287 | orchestrator | ok: [testbed-manager]
2025-09-19 00:43:08.729297 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:43:08.729308 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:43:08.729319 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:43:08.729329 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:43:08.729340 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:43:08.729350 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:43:08.729361 | orchestrator |
2025-09-19 00:43:08.729372 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-09-19 00:43:08.729382 | orchestrator | Friday 19 September 2025 00:43:01 +0000 (0:00:02.054) 0:01:11.316 ******
2025-09-19 00:43:08.729394 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-09-19 00:43:08.729406 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 00:43:08.729418 | orchestrator |
2025-09-19 00:43:08.729429 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-09-19 00:43:08.729440 | orchestrator | Friday 19 September 2025 00:43:02 +0000 (0:00:01.445) 0:01:12.761 ******
2025-09-19 00:43:08.729450 | orchestrator | changed: [testbed-manager]
2025-09-19 00:43:08.729461 | orchestrator |
2025-09-19 00:43:08.729472 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-09-19 00:43:08.729483 | orchestrator | Friday 19 September 2025 00:43:04 +0000 (0:00:02.089) 0:01:14.851 ******
2025-09-19 00:43:08.729502 | orchestrator | changed: [testbed-manager]
2025-09-19 00:43:08.729513 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:43:08.729524 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:43:08.729534 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:43:08.729545 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:43:08.729555 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:43:08.729566 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:43:08.729576 | orchestrator |
2025-09-19 00:43:08.729587 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 00:43:08.729598 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:43:08.729610 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:43:08.729621 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:43:08.729632 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:43:08.729653 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:43:08.729664 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:43:08.729675 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:43:08.729685 | orchestrator |
2025-09-19 00:43:08.729696 | orchestrator |
2025-09-19 00:43:08.729707 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 00:43:08.729718 | orchestrator | Friday 19 September 2025 00:43:07 +0000 (0:00:03.159) 0:01:18.011 ******
2025-09-19 00:43:08.729729 | orchestrator | ===============================================================================
2025-09-19 00:43:08.729739 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 34.92s
2025-09-19 00:43:08.729750 | orchestrator | osism.services.netdata : Add repository -------------------------------- 12.03s
2025-09-19 00:43:08.729761 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.30s
2025-09-19 00:43:08.729772 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.20s
2025-09-19 00:43:08.729782 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.16s
2025-09-19 00:43:08.729793 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.09s
2025-09-19 00:43:08.729804 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.05s
2025-09-19 00:43:08.729814 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.85s
2025-09-19 00:43:08.729825 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.83s
2025-09-19 00:43:08.729836 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.59s
2025-09-19 00:43:08.729846 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.47s
2025-09-19 00:43:08.729857 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.45s
2025-09-19 00:43:08.729874 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.32s
2025-09-19 00:43:08.729885 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.30s
2025-09-19 00:43:08.729895 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.29s
2025-09-19 00:43:08.729926 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.00s
2025-09-19 00:43:08.729945 | orchestrator | 2025-09-19 00:43:08 | INFO  | Task fcdeb286-c561-4c36-817f-f9b6ba43d26b is in state SUCCESS
2025-09-19 00:43:08.729956 | orchestrator | 2025-09-19 00:43:08 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:43:08.730226 | orchestrator | 2025-09-19 00:43:08 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:43:08.731704 | orchestrator | 2025-09-19 00:43:08 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:43:08.731726 | orchestrator | 2025-09-19 00:43:08 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:43:11.782736 | orchestrator | 2025-09-19 00:43:11 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:43:11.784716 | orchestrator | 2025-09-19 00:43:11 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:43:11.787317 | orchestrator | 2025-09-19 00:43:11 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:43:11.787415 | orchestrator | 2025-09-19 00:43:11 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:43:14.832260 | orchestrator | 2025-09-19 00:43:14 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:43:14.833469 | orchestrator | 2025-09-19 00:43:14 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:43:14.836042 | orchestrator | 2025-09-19 00:43:14 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:43:14.836077 | orchestrator | 2025-09-19 00:43:14 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:43:17.880091 | orchestrator | 2025-09-19 00:43:17 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:43:17.881519 | orchestrator | 2025-09-19 00:43:17 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:43:17.883135 | orchestrator | 2025-09-19 00:43:17 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:43:17.883750 | orchestrator | 2025-09-19 00:43:17 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:43:20.931712 | orchestrator | 2025-09-19 00:43:20 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:43:20.939386 | orchestrator | 2025-09-19 00:43:20 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:43:20.945226 | orchestrator | 2025-09-19 00:43:20 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:43:20.945794 | orchestrator | 2025-09-19 00:43:20 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:43:24.020486 | orchestrator | 2025-09-19 00:43:24 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:43:24.020570 | orchestrator | 2025-09-19 00:43:24 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:43:24.020583 | orchestrator | 2025-09-19 00:43:24 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:43:24.020591 | orchestrator | 2025-09-19 00:43:24 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:43:27.088955 | orchestrator | 2025-09-19 00:43:27 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:43:27.089041 | orchestrator | 2025-09-19 00:43:27 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:43:27.089052 | orchestrator | 2025-09-19 00:43:27 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:43:27.091313 | orchestrator | 2025-09-19 00:43:27 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:43:30.125789 | orchestrator | 2025-09-19 00:43:30 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:43:30.128155 | orchestrator | 2025-09-19 00:43:30 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:43:30.130176 | orchestrator | 2025-09-19 00:43:30 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:43:30.130220 | orchestrator | 2025-09-19 00:43:30 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:43:33.169879 | orchestrator | 2025-09-19 00:43:33 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:43:33.169962 | orchestrator | 2025-09-19 00:43:33 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:43:33.170593 | orchestrator | 2025-09-19 00:43:33 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:43:33.170612 | orchestrator | 2025-09-19 00:43:33 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:43:36.214629 | orchestrator | 2025-09-19 00:43:36 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:43:36.216001 | orchestrator | 2025-09-19 00:43:36 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:43:36.218218 | orchestrator | 2025-09-19 00:43:36 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:43:36.218968 | orchestrator | 2025-09-19 00:43:36 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:43:39.260733 | orchestrator | 2025-09-19 00:43:39 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:43:39.265483 | orchestrator | 2025-09-19 00:43:39 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:43:39.271169 | orchestrator | 2025-09-19 00:43:39 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:43:39.271219 | orchestrator | 2025-09-19 00:43:39 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:43:42.312373 | orchestrator | 2025-09-19 00:43:42 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:43:42.313185 | orchestrator | 2025-09-19 00:43:42 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:43:42.314284 | orchestrator | 2025-09-19 00:43:42 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:43:42.314319 | orchestrator | 2025-09-19 00:43:42 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:43:45.352442 | orchestrator | 2025-09-19 00:43:45 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:43:45.352570 | orchestrator | 2025-09-19 00:43:45 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:43:45.352594 | orchestrator | 2025-09-19 00:43:45 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:43:45.352612 | orchestrator | 2025-09-19 00:43:45 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:43:48.393437 | orchestrator | 2025-09-19 00:43:48 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:43:48.393537 | orchestrator | 2025-09-19 00:43:48 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:43:48.393551 | orchestrator | 2025-09-19 00:43:48 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:43:48.393563 | orchestrator | 2025-09-19 00:43:48 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:43:51.439596 | orchestrator | 2025-09-19 00:43:51 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:43:51.439727 | orchestrator | 2025-09-19 00:43:51 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:43:51.440241 | orchestrator | 2025-09-19 00:43:51 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:43:51.440268 | orchestrator | 2025-09-19 00:43:51 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:43:54.480399 | orchestrator | 2025-09-19 00:43:54 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state STARTED
2025-09-19 00:43:54.481060 | orchestrator | 2025-09-19 00:43:54 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:43:54.482118 | orchestrator | 2025-09-19 00:43:54 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:43:54.482144 | orchestrator | 2025-09-19 00:43:54 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:43:57.528323 | orchestrator | 2025-09-19 00:43:57 | INFO  | Task d3cd7d2c-9bfe-405c-bb1a-0f235087b38b is in state SUCCESS
2025-09-19 00:43:57.529346 | orchestrator |
2025-09-19 00:43:57.529362 | orchestrator |
2025-09-19 00:43:57.529373 | orchestrator | PLAY [Apply role common] *******************************************************
2025-09-19 00:43:57.529383 | orchestrator |
2025-09-19 00:43:57.530077 | orchestrator | TASK [common : include_tasks] **************************************************
2025-09-19 00:43:57.530100 | orchestrator | Friday 19 September 2025 00:41:42 +0000 (0:00:00.274) 0:00:00.274 ******
2025-09-19 00:43:57.530112 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 00:43:57.530125 | orchestrator |
2025-09-19 00:43:57.530135 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-09-19 00:43:57.530145 | orchestrator | Friday 19 September 2025 00:41:43 +0000 (0:00:01.098) 0:00:01.372 ******
2025-09-19 00:43:57.530155 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 00:43:57.530164 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 00:43:57.530174 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 00:43:57.530192 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 00:43:57.530202 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 00:43:57.530212 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 00:43:57.530221 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 00:43:57.530231 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 00:43:57.530242 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 00:43:57.530252 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 00:43:57.530261 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 00:43:57.530271 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 00:43:57.530280 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 00:43:57.530290 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 00:43:57.530299 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 00:43:57.530309 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 00:43:57.530319 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 00:43:57.530346 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 00:43:57.530356 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 00:43:57.530366 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 00:43:57.530375 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 00:43:57.530385 | orchestrator |
2025-09-19 00:43:57.530394 | orchestrator | TASK [common : include_tasks] **************************************************
2025-09-19 00:43:57.530404 | orchestrator | Friday 19 September 2025 00:41:46 +0000 (0:00:03.774) 0:00:05.147 ******
2025-09-19 00:43:57.530414 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 00:43:57.530424 | orchestrator |
2025-09-19 00:43:57.530434 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-09-19 00:43:57.530443 | orchestrator | Friday 19 September 2025 00:41:48 +0000 (0:00:01.311) 0:00:06.459 ******
2025-09-19 00:43:57.530457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 00:43:57.530471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 00:43:57.530523 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 00:43:57.530536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 00:43:57.530547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:43:57.530564 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 00:43:57.530574 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 00:43:57.530584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:43:57.530595 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:43:57.530635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:43:57.530649 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 00:43:57.530661 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:43:57.530684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:43:57.530698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:43:57.530710 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:43:57.530722 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:43:57.530734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:43:57.530773 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'},
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.530787 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.530799 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.530816 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.530863 | orchestrator | 2025-09-19 00:43:57.530875 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-09-19 00:43:57.530886 | orchestrator | Friday 19 September 2025 00:41:53 +0000 (0:00:05.376) 0:00:11.835 ****** 2025-09-19 00:43:57.530899 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 00:43:57.530911 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.530922 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.530934 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:43:57.530972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 00:43:57.530988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.530998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.531015 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:43:57.531026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 00:43:57.531036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.531046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.531056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 00:43:57.531066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.531091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.531102 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 00:43:57.531118 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.531128 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.531138 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:43:57.531147 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:43:57.531157 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:43:57.531166 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 00:43:57.531177 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.531186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.531196 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:43:57.531219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 00:43:57.531234 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.531251 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.531260 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:43:57.531270 | orchestrator | 2025-09-19 00:43:57.531280 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-09-19 00:43:57.531290 | orchestrator | Friday 19 September 2025 00:41:55 +0000 (0:00:02.062) 0:00:13.897 ****** 2025-09-19 00:43:57.531300 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 00:43:57.531310 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2025-09-19 00:43:57.531320 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.531329 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:43:57.531339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 00:43:57.531354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.531370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.531385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 00:43:57.531395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.531405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-09-19 00:43:57.531415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 00:43:57.531425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.531435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.531445 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:43:57.531473 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 00:43:57.531487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.531497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.531507 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:43:57.531517 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:43:57.531526 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:43:57.531536 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 00:43:57.531546 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.531556 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.531566 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:43:57.531576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 00:43:57.531595 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.531612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.531622 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:43:57.531632 | orchestrator | 2025-09-19 00:43:57.531641 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-09-19 00:43:57.531651 | orchestrator | Friday 19 September 2025 00:41:58 +0000 (0:00:02.545) 0:00:16.443 ****** 2025-09-19 00:43:57.531660 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:43:57.531670 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:43:57.531679 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:43:57.531689 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:43:57.531698 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:43:57.531707 | orchestrator | 
skipping: [testbed-node-4] 2025-09-19 00:43:57.531717 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:43:57.531726 | orchestrator | 2025-09-19 00:43:57.531735 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-09-19 00:43:57.531745 | orchestrator | Friday 19 September 2025 00:41:59 +0000 (0:00:01.636) 0:00:18.080 ****** 2025-09-19 00:43:57.531755 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:43:57.531764 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:43:57.531773 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:43:57.531783 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:43:57.531792 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:43:57.531802 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:43:57.531811 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:43:57.531820 | orchestrator | 2025-09-19 00:43:57.531855 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-09-19 00:43:57.531865 | orchestrator | Friday 19 September 2025 00:42:01 +0000 (0:00:01.548) 0:00:19.629 ****** 2025-09-19 00:43:57.531875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 00:43:57.531885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 00:43:57.531901 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 00:43:57.531911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 00:43:57.531931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.531945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.531956 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 00:43:57.531966 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-19 00:43:57.531976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.531991 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 00:43:57.532001 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 00:43:57.532015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.532030 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.532040 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.532050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.532060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.532070 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.532086 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.532096 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.532118 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.532141 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.532159 | orchestrator | 2025-09-19 00:43:57.532176 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-09-19 00:43:57.532193 | orchestrator | Friday 19 September 2025 00:42:08 +0000 (0:00:06.991) 0:00:26.620 ****** 2025-09-19 00:43:57.532210 | orchestrator | [WARNING]: Skipped 2025-09-19 00:43:57.532227 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-09-19 00:43:57.532243 | orchestrator | to this access issue: 2025-09-19 00:43:57.532260 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-09-19 00:43:57.532277 | orchestrator | directory 2025-09-19 00:43:57.532293 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 00:43:57.532310 | orchestrator | 2025-09-19 00:43:57.532326 | orchestrator | TASK 
[common : Find custom fluentd filter config files] ************************ 2025-09-19 00:43:57.532342 | orchestrator | Friday 19 September 2025 00:42:09 +0000 (0:00:01.253) 0:00:27.874 ****** 2025-09-19 00:43:57.532359 | orchestrator | [WARNING]: Skipped 2025-09-19 00:43:57.532376 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-09-19 00:43:57.532393 | orchestrator | to this access issue: 2025-09-19 00:43:57.532411 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-09-19 00:43:57.532429 | orchestrator | directory 2025-09-19 00:43:57.532578 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 00:43:57.532603 | orchestrator | 2025-09-19 00:43:57.532615 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-09-19 00:43:57.532636 | orchestrator | Friday 19 September 2025 00:42:10 +0000 (0:00:01.023) 0:00:28.897 ****** 2025-09-19 00:43:57.532646 | orchestrator | [WARNING]: Skipped 2025-09-19 00:43:57.532656 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-09-19 00:43:57.532665 | orchestrator | to this access issue: 2025-09-19 00:43:57.532675 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-09-19 00:43:57.532685 | orchestrator | directory 2025-09-19 00:43:57.532696 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 00:43:57.532707 | orchestrator | 2025-09-19 00:43:57.532717 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-09-19 00:43:57.532728 | orchestrator | Friday 19 September 2025 00:42:11 +0000 (0:00:00.850) 0:00:29.748 ****** 2025-09-19 00:43:57.532739 | orchestrator | [WARNING]: Skipped 2025-09-19 00:43:57.532750 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-09-19 00:43:57.532760 | 
orchestrator | to this access issue: 2025-09-19 00:43:57.532771 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-09-19 00:43:57.532781 | orchestrator | directory 2025-09-19 00:43:57.532792 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 00:43:57.532803 | orchestrator | 2025-09-19 00:43:57.532813 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-09-19 00:43:57.532852 | orchestrator | Friday 19 September 2025 00:42:12 +0000 (0:00:00.755) 0:00:30.503 ****** 2025-09-19 00:43:57.532873 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:43:57.532887 | orchestrator | changed: [testbed-manager] 2025-09-19 00:43:57.532898 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:43:57.532909 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:43:57.532919 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:43:57.532930 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:43:57.532941 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:43:57.532951 | orchestrator | 2025-09-19 00:43:57.532961 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-09-19 00:43:57.532972 | orchestrator | Friday 19 September 2025 00:42:16 +0000 (0:00:04.656) 0:00:35.160 ****** 2025-09-19 00:43:57.532983 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 00:43:57.532994 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 00:43:57.533004 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 00:43:57.533015 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 00:43:57.533026 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 00:43:57.533036 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 00:43:57.533047 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 00:43:57.533057 | orchestrator | 2025-09-19 00:43:57.533068 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-09-19 00:43:57.533079 | orchestrator | Friday 19 September 2025 00:42:20 +0000 (0:00:03.850) 0:00:39.011 ****** 2025-09-19 00:43:57.533090 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:43:57.533100 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:43:57.533111 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:43:57.533121 | orchestrator | changed: [testbed-manager] 2025-09-19 00:43:57.533144 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:43:57.533155 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:43:57.533165 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:43:57.533176 | orchestrator | 2025-09-19 00:43:57.533187 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-09-19 00:43:57.533207 | orchestrator | Friday 19 September 2025 00:42:24 +0000 (0:00:04.228) 0:00:43.240 ****** 2025-09-19 00:43:57.533375 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 
00:43:57.533395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.533408 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 00:43:57.533422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.533436 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.533463 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.533476 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 00:43:57.533503 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.533525 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 00:43:57.533538 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.533552 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 00:43:57.533564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.533575 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 00:43:57.533586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.533609 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.533626 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.533638 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.533650 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 00:43:57.533661 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 00:43:57.533672 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.533684 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.533695 | orchestrator | 2025-09-19 00:43:57.533706 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-09-19 00:43:57.533717 | orchestrator | Friday 19 September 2025 00:42:27 +0000 (0:00:02.577) 0:00:45.818 ****** 2025-09-19 00:43:57.533728 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 00:43:57.533740 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 00:43:57.533756 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 00:43:57.533767 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 00:43:57.533777 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 00:43:57.533788 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 00:43:57.533799 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 00:43:57.533809 | orchestrator | 2025-09-19 00:43:57.533887 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-09-19 00:43:57.533909 | orchestrator | Friday 19 September 2025 00:42:29 +0000 (0:00:02.340) 0:00:48.158 ****** 2025-09-19 00:43:57.533926 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 00:43:57.533944 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 00:43:57.533974 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 00:43:57.533993 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 00:43:57.534011 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 00:43:57.534076 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 00:43:57.534097 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 00:43:57.534115 | orchestrator | 2025-09-19 00:43:57.534134 | orchestrator | TASK [common : Check common containers] **************************************** 2025-09-19 00:43:57.534151 | orchestrator | Friday 19 September 2025 00:42:31 +0000 (0:00:01.998) 0:00:50.157 ****** 2025-09-19 00:43:57.534169 | orchestrator 
| changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 00:43:57.534188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 00:43:57.534204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 00:43:57.534222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 00:43:57.534252 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 00:43:57.534283 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 00:43:57.534309 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.534328 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 00:43:57.534347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.534364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.534382 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.534408 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.534441 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.534465 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.534484 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.534502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.534518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.534537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.534564 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.534582 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.534601 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:43:57.534619 | orchestrator 
| 2025-09-19 00:43:57.534637 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-09-19 00:43:57.534655 | orchestrator | Friday 19 September 2025 00:42:34 +0000 (0:00:02.969) 0:00:53.126 ****** 2025-09-19 00:43:57.534678 | orchestrator | changed: [testbed-manager] 2025-09-19 00:43:57.534696 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:43:57.534713 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:43:57.534730 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:43:57.534748 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:43:57.534765 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:43:57.534781 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:43:57.534797 | orchestrator | 2025-09-19 00:43:57.534821 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-09-19 00:43:57.534908 | orchestrator | Friday 19 September 2025 00:42:36 +0000 (0:00:01.576) 0:00:54.702 ****** 2025-09-19 00:43:57.534925 | orchestrator | changed: [testbed-manager] 2025-09-19 00:43:57.534941 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:43:57.534958 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:43:57.534975 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:43:57.534991 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:43:57.535007 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:43:57.535017 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:43:57.535027 | orchestrator | 2025-09-19 00:43:57.535038 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-19 00:43:57.535055 | orchestrator | Friday 19 September 2025 00:42:37 +0000 (0:00:01.028) 0:00:55.731 ****** 2025-09-19 00:43:57.535071 | orchestrator | 2025-09-19 00:43:57.535086 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-19 00:43:57.535103 | 
orchestrator | Friday 19 September 2025 00:42:37 +0000 (0:00:00.066) 0:00:55.798 ****** 2025-09-19 00:43:57.535119 | orchestrator | 2025-09-19 00:43:57.535136 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-19 00:43:57.535153 | orchestrator | Friday 19 September 2025 00:42:37 +0000 (0:00:00.059) 0:00:55.857 ****** 2025-09-19 00:43:57.535167 | orchestrator | 2025-09-19 00:43:57.535177 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-19 00:43:57.535186 | orchestrator | Friday 19 September 2025 00:42:37 +0000 (0:00:00.193) 0:00:56.051 ****** 2025-09-19 00:43:57.535196 | orchestrator | 2025-09-19 00:43:57.535206 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-19 00:43:57.535225 | orchestrator | Friday 19 September 2025 00:42:37 +0000 (0:00:00.058) 0:00:56.109 ****** 2025-09-19 00:43:57.535234 | orchestrator | 2025-09-19 00:43:57.535244 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-19 00:43:57.535253 | orchestrator | Friday 19 September 2025 00:42:37 +0000 (0:00:00.059) 0:00:56.168 ****** 2025-09-19 00:43:57.535263 | orchestrator | 2025-09-19 00:43:57.535273 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-19 00:43:57.535282 | orchestrator | Friday 19 September 2025 00:42:37 +0000 (0:00:00.058) 0:00:56.227 ****** 2025-09-19 00:43:57.535291 | orchestrator | 2025-09-19 00:43:57.535301 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-09-19 00:43:57.535311 | orchestrator | Friday 19 September 2025 00:42:38 +0000 (0:00:00.077) 0:00:56.305 ****** 2025-09-19 00:43:57.535320 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:43:57.535330 | orchestrator | changed: [testbed-manager] 2025-09-19 00:43:57.535340 | orchestrator | changed: 
[testbed-node-1] 2025-09-19 00:43:57.535349 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:43:57.535359 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:43:57.535368 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:43:57.535378 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:43:57.535387 | orchestrator | 2025-09-19 00:43:57.535397 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-09-19 00:43:57.535406 | orchestrator | Friday 19 September 2025 00:43:12 +0000 (0:00:34.665) 0:01:30.970 ****** 2025-09-19 00:43:57.535416 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:43:57.535425 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:43:57.535434 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:43:57.535444 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:43:57.535453 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:43:57.535462 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:43:57.535469 | orchestrator | changed: [testbed-manager] 2025-09-19 00:43:57.535477 | orchestrator | 2025-09-19 00:43:57.535485 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-09-19 00:43:57.535493 | orchestrator | Friday 19 September 2025 00:43:44 +0000 (0:00:32.093) 0:02:03.064 ****** 2025-09-19 00:43:57.535500 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:43:57.535509 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:43:57.535516 | orchestrator | ok: [testbed-manager] 2025-09-19 00:43:57.535524 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:43:57.535531 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:43:57.535539 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:43:57.535547 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:43:57.535555 | orchestrator | 2025-09-19 00:43:57.535563 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-09-19 
00:43:57.535570 | orchestrator | Friday 19 September 2025 00:43:46 +0000 (0:00:01.989) 0:02:05.054 ****** 2025-09-19 00:43:57.535578 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:43:57.535586 | orchestrator | changed: [testbed-manager] 2025-09-19 00:43:57.535593 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:43:57.535601 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:43:57.535609 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:43:57.535616 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:43:57.535624 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:43:57.535632 | orchestrator | 2025-09-19 00:43:57.535639 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:43:57.535648 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 00:43:57.535658 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 00:43:57.535665 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 00:43:57.535686 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 00:43:57.535694 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 00:43:57.535709 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 00:43:57.535723 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 00:43:57.535736 | orchestrator | 2025-09-19 00:43:57.535749 | orchestrator | 2025-09-19 00:43:57.535769 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 00:43:57.535785 | orchestrator | Friday 19 September 2025 00:43:56 
+0000 (0:00:09.302) 0:02:14.357 ****** 2025-09-19 00:43:57.535799 | orchestrator | =============================================================================== 2025-09-19 00:43:57.535810 | orchestrator | common : Restart fluentd container ------------------------------------- 34.67s 2025-09-19 00:43:57.535818 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 32.09s 2025-09-19 00:43:57.535849 | orchestrator | common : Restart cron container ----------------------------------------- 9.30s 2025-09-19 00:43:57.535859 | orchestrator | common : Copying over config.json files for services -------------------- 6.99s 2025-09-19 00:43:57.535873 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.38s 2025-09-19 00:43:57.535887 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.66s 2025-09-19 00:43:57.535899 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.23s 2025-09-19 00:43:57.535918 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.85s 2025-09-19 00:43:57.535934 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.77s 2025-09-19 00:43:57.535948 | orchestrator | common : Check common containers ---------------------------------------- 2.97s 2025-09-19 00:43:57.535962 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.58s 2025-09-19 00:43:57.535971 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.55s 2025-09-19 00:43:57.535979 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.34s 2025-09-19 00:43:57.535987 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.06s 2025-09-19 00:43:57.535995 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox 
---------------------- 2.00s 2025-09-19 00:43:57.536002 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.99s 2025-09-19 00:43:57.536010 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.64s 2025-09-19 00:43:57.536018 | orchestrator | common : Creating log volume -------------------------------------------- 1.58s 2025-09-19 00:43:57.536025 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.55s 2025-09-19 00:43:57.536033 | orchestrator | common : include_tasks -------------------------------------------------- 1.31s 2025-09-19 00:43:57.536041 | orchestrator | 2025-09-19 00:43:57 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:43:57.536049 | orchestrator | 2025-09-19 00:43:57 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:43:57.536057 | orchestrator | 2025-09-19 00:43:57 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:44:00.565300 | orchestrator | 2025-09-19 00:44:00 | INFO  | Task e0f96b4f-8e63-4607-9742-76fa0e870bbc is in state STARTED 2025-09-19 00:44:00.565522 | orchestrator | 2025-09-19 00:44:00 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:44:00.567625 | orchestrator | 2025-09-19 00:44:00 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:44:00.568266 | orchestrator | 2025-09-19 00:44:00 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:44:00.569029 | orchestrator | 2025-09-19 00:44:00 | INFO  | Task 57487ebf-dc81-4259-b1c8-121ae29a41c6 is in state STARTED 2025-09-19 00:44:00.570404 | orchestrator | 2025-09-19 00:44:00 | INFO  | Task 40b185dc-2c80-4088-895a-3e1affa351a2 is in state STARTED 2025-09-19 00:44:00.570429 | orchestrator | 2025-09-19 00:44:00 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:44:03.592915 | 
orchestrator | 2025-09-19 00:44:03 | INFO  | Task e0f96b4f-8e63-4607-9742-76fa0e870bbc is in state STARTED 2025-09-19 00:44:03.593116 | orchestrator | 2025-09-19 00:44:03 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:44:03.593621 | orchestrator | 2025-09-19 00:44:03 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:44:03.594174 | orchestrator | 2025-09-19 00:44:03 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:44:03.594667 | orchestrator | 2025-09-19 00:44:03 | INFO  | Task 57487ebf-dc81-4259-b1c8-121ae29a41c6 is in state STARTED 2025-09-19 00:44:03.595123 | orchestrator | 2025-09-19 00:44:03 | INFO  | Task 40b185dc-2c80-4088-895a-3e1affa351a2 is in state STARTED 2025-09-19 00:44:03.595145 | orchestrator | 2025-09-19 00:44:03 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:44:06.619958 | orchestrator | 2025-09-19 00:44:06 | INFO  | Task e0f96b4f-8e63-4607-9742-76fa0e870bbc is in state STARTED 2025-09-19 00:44:06.620046 | orchestrator | 2025-09-19 00:44:06 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:44:06.622343 | orchestrator | 2025-09-19 00:44:06 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:44:06.622414 | orchestrator | 2025-09-19 00:44:06 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:44:06.622430 | orchestrator | 2025-09-19 00:44:06 | INFO  | Task 57487ebf-dc81-4259-b1c8-121ae29a41c6 is in state STARTED 2025-09-19 00:44:06.624158 | orchestrator | 2025-09-19 00:44:06 | INFO  | Task 40b185dc-2c80-4088-895a-3e1affa351a2 is in state STARTED 2025-09-19 00:44:06.624203 | orchestrator | 2025-09-19 00:44:06 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:44:09.659628 | orchestrator | 2025-09-19 00:44:09 | INFO  | Task e0f96b4f-8e63-4607-9742-76fa0e870bbc is in state STARTED 2025-09-19 00:44:09.661929 | 
orchestrator | 2025-09-19 00:44:09 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:44:09.665533 | orchestrator | 2025-09-19 00:44:09 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:44:09.667162 | orchestrator | 2025-09-19 00:44:09 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:44:09.668778 | orchestrator | 2025-09-19 00:44:09 | INFO  | Task 57487ebf-dc81-4259-b1c8-121ae29a41c6 is in state STARTED 2025-09-19 00:44:09.669650 | orchestrator | 2025-09-19 00:44:09 | INFO  | Task 40b185dc-2c80-4088-895a-3e1affa351a2 is in state STARTED 2025-09-19 00:44:09.669689 | orchestrator | 2025-09-19 00:44:09 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:44:12.697192 | orchestrator | 2025-09-19 00:44:12 | INFO  | Task e0f96b4f-8e63-4607-9742-76fa0e870bbc is in state STARTED 2025-09-19 00:44:12.697278 | orchestrator | 2025-09-19 00:44:12 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:44:12.697635 | orchestrator | 2025-09-19 00:44:12 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:44:12.698073 | orchestrator | 2025-09-19 00:44:12 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:44:12.698689 | orchestrator | 2025-09-19 00:44:12 | INFO  | Task 57487ebf-dc81-4259-b1c8-121ae29a41c6 is in state STARTED 2025-09-19 00:44:12.699286 | orchestrator | 2025-09-19 00:44:12 | INFO  | Task 40b185dc-2c80-4088-895a-3e1affa351a2 is in state STARTED 2025-09-19 00:44:12.699309 | orchestrator | 2025-09-19 00:44:12 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:44:15.725873 | orchestrator | 2025-09-19 00:44:15 | INFO  | Task e0f96b4f-8e63-4607-9742-76fa0e870bbc is in state STARTED 2025-09-19 00:44:15.725965 | orchestrator | 2025-09-19 00:44:15 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:44:15.725974 | 
orchestrator | 2025-09-19 00:44:15 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:44:15.726297 | orchestrator | 2025-09-19 00:44:15 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:44:15.727000 | orchestrator | 2025-09-19 00:44:15 | INFO  | Task 57487ebf-dc81-4259-b1c8-121ae29a41c6 is in state STARTED 2025-09-19 00:44:15.727901 | orchestrator | 2025-09-19 00:44:15 | INFO  | Task 40b185dc-2c80-4088-895a-3e1affa351a2 is in state STARTED 2025-09-19 00:44:15.727920 | orchestrator | 2025-09-19 00:44:15 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:44:18.763423 | orchestrator | 2025-09-19 00:44:18 | INFO  | Task e0f96b4f-8e63-4607-9742-76fa0e870bbc is in state STARTED 2025-09-19 00:44:18.764535 | orchestrator | 2025-09-19 00:44:18 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:44:18.765944 | orchestrator | 2025-09-19 00:44:18 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:44:18.768586 | orchestrator | 2025-09-19 00:44:18 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:44:18.770002 | orchestrator | 2025-09-19 00:44:18 | INFO  | Task 57487ebf-dc81-4259-b1c8-121ae29a41c6 is in state STARTED 2025-09-19 00:44:18.770676 | orchestrator | 2025-09-19 00:44:18 | INFO  | Task 40b185dc-2c80-4088-895a-3e1affa351a2 is in state SUCCESS 2025-09-19 00:44:18.773092 | orchestrator | 2025-09-19 00:44:18 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:44:21.806501 | orchestrator | 2025-09-19 00:44:21 | INFO  | Task e0f96b4f-8e63-4607-9742-76fa0e870bbc is in state STARTED 2025-09-19 00:44:21.807925 | orchestrator | 2025-09-19 00:44:21 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:44:21.807969 | orchestrator | 2025-09-19 00:44:21 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:44:21.808453 | 
orchestrator | 2025-09-19 00:44:21 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED 2025-09-19 00:44:21.809207 | orchestrator | 2025-09-19 00:44:21 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:44:21.812025 | orchestrator | 2025-09-19 00:44:21 | INFO  | Task 57487ebf-dc81-4259-b1c8-121ae29a41c6 is in state STARTED 2025-09-19 00:44:21.812057 | orchestrator | 2025-09-19 00:44:21 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:44:24.894329 | orchestrator | 2025-09-19 00:44:24 | INFO  | Task e0f96b4f-8e63-4607-9742-76fa0e870bbc is in state SUCCESS 2025-09-19 00:44:24.895203 | orchestrator | 2025-09-19 00:44:24.895226 | orchestrator | 2025-09-19 00:44:24.895233 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 00:44:24.895253 | orchestrator | 2025-09-19 00:44:24.895259 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 00:44:24.895266 | orchestrator | Friday 19 September 2025 00:44:03 +0000 (0:00:00.257) 0:00:00.258 ****** 2025-09-19 00:44:24.895272 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:44:24.895279 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:44:24.895286 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:44:24.895292 | orchestrator | 2025-09-19 00:44:24.895298 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 00:44:24.895305 | orchestrator | Friday 19 September 2025 00:44:03 +0000 (0:00:00.510) 0:00:00.768 ****** 2025-09-19 00:44:24.895311 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-09-19 00:44:24.895318 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-09-19 00:44:24.895324 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-09-19 00:44:24.895330 | orchestrator | 2025-09-19 00:44:24.895336 | orchestrator | PLAY [Apply 
role memcached] **************************************************** 2025-09-19 00:44:24.895341 | orchestrator | 2025-09-19 00:44:24.895346 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-09-19 00:44:24.895352 | orchestrator | Friday 19 September 2025 00:44:04 +0000 (0:00:00.916) 0:00:01.685 ****** 2025-09-19 00:44:24.895357 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:44:24.895363 | orchestrator | 2025-09-19 00:44:24.895368 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-09-19 00:44:24.895373 | orchestrator | Friday 19 September 2025 00:44:05 +0000 (0:00:00.884) 0:00:02.570 ****** 2025-09-19 00:44:24.895379 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-19 00:44:24.895384 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-19 00:44:24.895389 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-19 00:44:24.895395 | orchestrator | 2025-09-19 00:44:24.895400 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-09-19 00:44:24.895405 | orchestrator | Friday 19 September 2025 00:44:06 +0000 (0:00:00.944) 0:00:03.514 ****** 2025-09-19 00:44:24.895411 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-19 00:44:24.895416 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-19 00:44:24.895421 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-19 00:44:24.895427 | orchestrator | 2025-09-19 00:44:24.895432 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-09-19 00:44:24.895437 | orchestrator | Friday 19 September 2025 00:44:08 +0000 (0:00:02.166) 0:00:05.680 ****** 2025-09-19 00:44:24.895442 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:44:24.895448 | 
orchestrator | changed: [testbed-node-1] 2025-09-19 00:44:24.895454 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:44:24.895459 | orchestrator | 2025-09-19 00:44:24.895464 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-09-19 00:44:24.895470 | orchestrator | Friday 19 September 2025 00:44:10 +0000 (0:00:02.088) 0:00:07.769 ****** 2025-09-19 00:44:24.895475 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:44:24.895480 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:44:24.895485 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:44:24.895490 | orchestrator | 2025-09-19 00:44:24.895496 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:44:24.895501 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 00:44:24.895508 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 00:44:24.895513 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 00:44:24.895522 | orchestrator | 2025-09-19 00:44:24.895527 | orchestrator | 2025-09-19 00:44:24.895533 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 00:44:24.895538 | orchestrator | Friday 19 September 2025 00:44:18 +0000 (0:00:07.378) 0:00:15.147 ****** 2025-09-19 00:44:24.895544 | orchestrator | =============================================================================== 2025-09-19 00:44:24.895549 | orchestrator | memcached : Restart memcached container --------------------------------- 7.38s 2025-09-19 00:44:24.895554 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.17s 2025-09-19 00:44:24.895560 | orchestrator | memcached : Check memcached container ----------------------------------- 2.09s 
2025-09-19 00:44:24.895565 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.94s 2025-09-19 00:44:24.895570 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.92s 2025-09-19 00:44:24.895575 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.88s 2025-09-19 00:44:24.895581 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.51s 2025-09-19 00:44:24.895586 | orchestrator | 2025-09-19 00:44:24.895591 | orchestrator | 2025-09-19 00:44:24.895597 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 00:44:24.895602 | orchestrator | 2025-09-19 00:44:24.895607 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 00:44:24.895612 | orchestrator | Friday 19 September 2025 00:44:03 +0000 (0:00:00.457) 0:00:00.457 ****** 2025-09-19 00:44:24.895618 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:44:24.895623 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:44:24.895628 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:44:24.895634 | orchestrator | 2025-09-19 00:44:24.895639 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 00:44:24.895651 | orchestrator | Friday 19 September 2025 00:44:03 +0000 (0:00:00.402) 0:00:00.859 ****** 2025-09-19 00:44:24.895657 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-09-19 00:44:24.895662 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-09-19 00:44:24.895667 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-09-19 00:44:24.895673 | orchestrator | 2025-09-19 00:44:24.895678 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-09-19 00:44:24.895683 | orchestrator | 2025-09-19 00:44:24.895688 | 
orchestrator | TASK [redis : include_tasks] *************************************************** 2025-09-19 00:44:24.895694 | orchestrator | Friday 19 September 2025 00:44:04 +0000 (0:00:00.976) 0:00:01.835 ****** 2025-09-19 00:44:24.895699 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:44:24.895705 | orchestrator | 2025-09-19 00:44:24.895710 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-09-19 00:44:24.895715 | orchestrator | Friday 19 September 2025 00:44:05 +0000 (0:00:00.591) 0:00:02.427 ****** 2025-09-19 00:44:24.895723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 00:44:24.895731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 00:44:24.895741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 00:44:24.895755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 00:44:24.895764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 00:44:24.895775 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 00:44:24.895781 | orchestrator | 2025-09-19 00:44:24.895786 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-09-19 00:44:24.895831 | orchestrator | Friday 19 September 2025 00:44:06 +0000 (0:00:01.601) 0:00:04.029 ****** 2025-09-19 00:44:24.895838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 00:44:24.895844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 00:44:24.895853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 00:44:24.895859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 00:44:24.895867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 00:44:24.895878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 00:44:24.895884 | orchestrator | 2025-09-19 00:44:24.895889 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-09-19 00:44:24.895895 | orchestrator | Friday 19 September 2025 00:44:09 +0000 (0:00:03.148) 0:00:07.177 ****** 2025-09-19 00:44:24.895900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 00:44:24.895906 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 00:44:24.895915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 00:44:24.895921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 00:44:24.895929 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 00:44:24.895938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 00:44:24.895944 | orchestrator | 2025-09-19 00:44:24.895950 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-09-19 00:44:24.895955 | orchestrator | Friday 19 September 2025 00:44:12 +0000 (0:00:02.852) 0:00:10.029 ****** 2025-09-19 00:44:24.895961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 00:44:24.895969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 00:44:24.895975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 00:44:24.895981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 00:44:24.895993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 00:44:24.896002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 00:44:24.896008 | orchestrator | 2025-09-19 00:44:24.896013 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-19 00:44:24.896019 | orchestrator | Friday 19 September 2025 00:44:14 +0000 (0:00:01.615) 
0:00:11.645 ****** 2025-09-19 00:44:24.896024 | orchestrator | 2025-09-19 00:44:24.896030 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-19 00:44:24.896035 | orchestrator | Friday 19 September 2025 00:44:14 +0000 (0:00:00.073) 0:00:11.718 ****** 2025-09-19 00:44:24.896040 | orchestrator | 2025-09-19 00:44:24.896046 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-19 00:44:24.896054 | orchestrator | Friday 19 September 2025 00:44:14 +0000 (0:00:00.073) 0:00:11.792 ****** 2025-09-19 00:44:24.896060 | orchestrator | 2025-09-19 00:44:24.896065 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-09-19 00:44:24.896070 | orchestrator | Friday 19 September 2025 00:44:14 +0000 (0:00:00.061) 0:00:11.854 ****** 2025-09-19 00:44:24.896076 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:44:24.896081 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:44:24.896086 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:44:24.896092 | orchestrator | 2025-09-19 00:44:24.896097 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-09-19 00:44:24.896102 | orchestrator | Friday 19 September 2025 00:44:18 +0000 (0:00:03.723) 0:00:15.578 ****** 2025-09-19 00:44:24.896108 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:44:24.896113 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:44:24.896118 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:44:24.896123 | orchestrator | 2025-09-19 00:44:24.896129 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:44:24.896134 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 00:44:24.896140 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2025-09-19 00:44:24.896145 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 00:44:24.896151 | orchestrator | 2025-09-19 00:44:24.896156 | orchestrator | 2025-09-19 00:44:24.896161 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 00:44:24.896166 | orchestrator | Friday 19 September 2025 00:44:22 +0000 (0:00:04.256) 0:00:19.835 ****** 2025-09-19 00:44:24.896172 | orchestrator | =============================================================================== 2025-09-19 00:44:24.896177 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 4.26s 2025-09-19 00:44:24.896182 | orchestrator | redis : Restart redis container ----------------------------------------- 3.72s 2025-09-19 00:44:24.896188 | orchestrator | redis : Copying over default config.json files -------------------------- 3.15s 2025-09-19 00:44:24.896193 | orchestrator | redis : Copying over redis config files --------------------------------- 2.85s 2025-09-19 00:44:24.896198 | orchestrator | redis : Check redis containers ------------------------------------------ 1.62s 2025-09-19 00:44:24.896203 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.60s 2025-09-19 00:44:24.896209 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.98s 2025-09-19 00:44:24.896214 | orchestrator | redis : include_tasks --------------------------------------------------- 0.59s 2025-09-19 00:44:24.896219 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.40s 2025-09-19 00:44:24.896225 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.21s 2025-09-19 00:44:24.896384 | orchestrator | 2025-09-19 00:44:24 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 
00:44:24.899114 | orchestrator | 2025-09-19 00:44:24 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:44:24.900767 | orchestrator | 2025-09-19 00:44:24 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED 2025-09-19 00:44:24.901686 | orchestrator | 2025-09-19 00:44:24 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:44:24.905493 | orchestrator | 2025-09-19 00:44:24 | INFO  | Task 57487ebf-dc81-4259-b1c8-121ae29a41c6 is in state STARTED 2025-09-19 00:44:24.905527 | orchestrator | 2025-09-19 00:44:24 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:44:27.930586 | orchestrator | 2025-09-19 00:44:27 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:44:27.933911 | orchestrator | 2025-09-19 00:44:27 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:44:27.934836 | orchestrator | 2025-09-19 00:44:27 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED 2025-09-19 00:44:27.936066 | orchestrator | 2025-09-19 00:44:27 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:44:27.937174 | orchestrator | 2025-09-19 00:44:27 | INFO  | Task 57487ebf-dc81-4259-b1c8-121ae29a41c6 is in state STARTED 2025-09-19 00:44:27.937496 | orchestrator | 2025-09-19 00:44:27 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:44:30.972361 | orchestrator | 2025-09-19 00:44:30 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:44:30.972445 | orchestrator | 2025-09-19 00:44:30 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:44:30.972460 | orchestrator | 2025-09-19 00:44:30 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED 2025-09-19 00:44:30.972779 | orchestrator | 2025-09-19 00:44:30 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 
00:44:30.973467 | orchestrator | 2025-09-19 00:44:30 | INFO  | Task 57487ebf-dc81-4259-b1c8-121ae29a41c6 is in state STARTED 2025-09-19 00:44:30.973490 | orchestrator | 2025-09-19 00:44:30 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:44:34.012633 | orchestrator | 2025-09-19 00:44:34 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:44:34.013072 | orchestrator | 2025-09-19 00:44:34 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:44:34.013873 | orchestrator | 2025-09-19 00:44:34 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED 2025-09-19 00:44:34.016833 | orchestrator | 2025-09-19 00:44:34 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:44:34.019660 | orchestrator | 2025-09-19 00:44:34 | INFO  | Task 57487ebf-dc81-4259-b1c8-121ae29a41c6 is in state STARTED 2025-09-19 00:44:34.019693 | orchestrator | 2025-09-19 00:44:34 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:44:37.081955 | orchestrator | 2025-09-19 00:44:37 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:44:37.087122 | orchestrator | 2025-09-19 00:44:37 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:44:37.087167 | orchestrator | 2025-09-19 00:44:37 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED 2025-09-19 00:44:37.087179 | orchestrator | 2025-09-19 00:44:37 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:44:37.087191 | orchestrator | 2025-09-19 00:44:37 | INFO  | Task 57487ebf-dc81-4259-b1c8-121ae29a41c6 is in state STARTED 2025-09-19 00:44:37.087202 | orchestrator | 2025-09-19 00:44:37 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:44:40.132476 | orchestrator | 2025-09-19 00:44:40 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:44:40.132715 | orchestrator 
| 2025-09-19 00:44:40 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:44:40.134155 | orchestrator | 2025-09-19 00:44:40 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED 2025-09-19 00:44:40.135720 | orchestrator | 2025-09-19 00:44:40 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:44:40.136466 | orchestrator | 2025-09-19 00:44:40 | INFO  | Task 57487ebf-dc81-4259-b1c8-121ae29a41c6 is in state STARTED 2025-09-19 00:44:40.136518 | orchestrator | 2025-09-19 00:44:40 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:44:43.193066 | orchestrator | 2025-09-19 00:44:43 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:44:43.197948 | orchestrator | 2025-09-19 00:44:43 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:44:43.198757 | orchestrator | 2025-09-19 00:44:43 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED 2025-09-19 00:44:43.199583 | orchestrator | 2025-09-19 00:44:43 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:44:43.200495 | orchestrator | 2025-09-19 00:44:43 | INFO  | Task 57487ebf-dc81-4259-b1c8-121ae29a41c6 is in state STARTED 2025-09-19 00:44:43.200568 | orchestrator | 2025-09-19 00:44:43 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:44:46.234135 | orchestrator | 2025-09-19 00:44:46 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:44:46.234320 | orchestrator | 2025-09-19 00:44:46 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:44:46.234808 | orchestrator | 2025-09-19 00:44:46 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED 2025-09-19 00:44:46.235534 | orchestrator | 2025-09-19 00:44:46 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:44:46.236266 | orchestrator | 
2025-09-19 00:44:46 | INFO  | Task 57487ebf-dc81-4259-b1c8-121ae29a41c6 is in state STARTED
2025-09-19 00:44:46.236292 | orchestrator | 2025-09-19 00:44:46 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:44:49.274325 | orchestrator | 2025-09-19 00:44:49 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED
2025-09-19 00:44:49.276489 | orchestrator | 2025-09-19 00:44:49 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:44:49.276525 | orchestrator | 2025-09-19 00:44:49 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED
2025-09-19 00:44:49.276537 | orchestrator | 2025-09-19 00:44:49 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:44:49.277347 | orchestrator | 2025-09-19 00:44:49 | INFO  | Task 57487ebf-dc81-4259-b1c8-121ae29a41c6 is in state STARTED
2025-09-19 00:44:49.277379 | orchestrator | 2025-09-19 00:44:49 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:44:52.313218 | orchestrator | 2025-09-19 00:44:52 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED
2025-09-19 00:44:52.313625 | orchestrator | 2025-09-19 00:44:52 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:44:52.314004 | orchestrator | 2025-09-19 00:44:52 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED
2025-09-19 00:44:52.314600 | orchestrator | 2025-09-19 00:44:52 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:44:52.317564 | orchestrator | 2025-09-19 00:44:52 | INFO  | Task 57487ebf-dc81-4259-b1c8-121ae29a41c6 is in state STARTED
2025-09-19 00:44:52.317603 | orchestrator | 2025-09-19 00:44:52 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:44:55.359082 | orchestrator | 2025-09-19 00:44:55 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED
2025-09-19 00:44:55.359831 | orchestrator | 2025-09-19 00:44:55 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:44:55.360557 | orchestrator | 2025-09-19 00:44:55 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED
2025-09-19 00:44:55.361336 | orchestrator | 2025-09-19 00:44:55 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:44:55.362123 | orchestrator | 2025-09-19 00:44:55 | INFO  | Task 57487ebf-dc81-4259-b1c8-121ae29a41c6 is in state STARTED
2025-09-19 00:44:55.362144 | orchestrator | 2025-09-19 00:44:55 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:44:58.394937 | orchestrator | 2025-09-19 00:44:58 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED
2025-09-19 00:44:58.395024 | orchestrator | 2025-09-19 00:44:58 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:44:58.395782 | orchestrator | 2025-09-19 00:44:58 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED
2025-09-19 00:44:58.396484 | orchestrator | 2025-09-19 00:44:58 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:44:58.397770 | orchestrator | 2025-09-19 00:44:58 | INFO  | Task 57487ebf-dc81-4259-b1c8-121ae29a41c6 is in state STARTED
2025-09-19 00:44:58.397798 | orchestrator | 2025-09-19 00:44:58 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:45:01.529309 | orchestrator | 2025-09-19 00:45:01 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED
2025-09-19 00:45:01.529399 | orchestrator | 2025-09-19 00:45:01 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:45:01.534549 | orchestrator | 2025-09-19 00:45:01 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED
2025-09-19 00:45:01.535002 | orchestrator | 2025-09-19 00:45:01 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:45:01.535843 | orchestrator | 2025-09-19 00:45:01 | INFO  | Task 57487ebf-dc81-4259-b1c8-121ae29a41c6 is in state STARTED
2025-09-19 00:45:01.535875 | orchestrator | 2025-09-19 00:45:01 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:45:04.559791 | orchestrator | 2025-09-19 00:45:04 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED
2025-09-19 00:45:04.560843 | orchestrator | 2025-09-19 00:45:04 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:45:04.561708 | orchestrator | 2025-09-19 00:45:04 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED
2025-09-19 00:45:04.562645 | orchestrator | 2025-09-19 00:45:04 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:45:04.563781 | orchestrator | 2025-09-19 00:45:04 | INFO  | Task 57487ebf-dc81-4259-b1c8-121ae29a41c6 is in state STARTED
2025-09-19 00:45:04.563825 | orchestrator | 2025-09-19 00:45:04 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:45:07.591166 | orchestrator | 2025-09-19 00:45:07 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED
2025-09-19 00:45:07.591474 | orchestrator | 2025-09-19 00:45:07 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED
2025-09-19 00:45:07.592149 | orchestrator | 2025-09-19 00:45:07 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED
2025-09-19 00:45:07.592881 | orchestrator | 2025-09-19 00:45:07 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:45:07.593597 | orchestrator | 2025-09-19 00:45:07 | INFO  | Task 57487ebf-dc81-4259-b1c8-121ae29a41c6 is in state STARTED
2025-09-19 00:45:07.593623 | orchestrator | 2025-09-19 00:45:07 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:45:10.700154 | orchestrator |
2025-09-19 00:45:10.700251 | orchestrator |
2025-09-19 00:45:10.700266 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 00:45:10.700293 |
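The wait loop visible above polls the same set of task IDs every few seconds until they leave the STARTED state. A minimal sketch of that pattern, assuming a `get_state` callback that stands in for the actual task-result lookup (hypothetical helper, not the osism implementation):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0):
    """Poll task states until none is still PENDING/STARTED.

    get_state: callable mapping a task ID to a state string
    (an assumed interface standing in for a Celery-style result lookup).
    """
    pending = set(task_ids)
    while pending:
        # Iterate over a snapshot so we can discard finished tasks safely.
        for tid in sorted(pending):
            state = get_state(tid)
            print(f"Task {tid} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                pending.discard(tid)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```

The loop re-checks every still-pending task each round, which matches the log's behavior of re-printing all five task IDs on every pass.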
orchestrator |
2025-09-19 00:45:10.700303 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 00:45:10.700313 | orchestrator | Friday 19 September 2025 00:44:02 +0000 (0:00:00.400) 0:00:00.400 ******
2025-09-19 00:45:10.700327 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:45:10.700338 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:45:10.700347 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:45:10.700356 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:45:10.700366 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:45:10.700375 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:45:10.700384 | orchestrator |
2025-09-19 00:45:10.700394 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 00:45:10.700404 | orchestrator | Friday 19 September 2025 00:44:03 +0000 (0:00:00.822) 0:00:01.222 ******
2025-09-19 00:45:10.700413 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-19 00:45:10.700423 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-19 00:45:10.700432 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-19 00:45:10.700442 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-19 00:45:10.700451 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-19 00:45:10.700461 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-19 00:45:10.700470 | orchestrator |
2025-09-19 00:45:10.700479 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-09-19 00:45:10.700489 | orchestrator |
2025-09-19 00:45:10.700498 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-09-19 00:45:10.700508 | orchestrator | Friday 19 September 2025 00:44:04 +0000 (0:00:01.174) 0:00:02.397 ******
2025-09-19 00:45:10.700518 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 00:45:10.700528 | orchestrator |
2025-09-19 00:45:10.700538 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-09-19 00:45:10.700547 | orchestrator | Friday 19 September 2025 00:44:06 +0000 (0:00:01.657) 0:00:04.054 ******
2025-09-19 00:45:10.700557 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-09-19 00:45:10.700566 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-09-19 00:45:10.700576 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-09-19 00:45:10.700585 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-09-19 00:45:10.700594 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-09-19 00:45:10.700604 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-09-19 00:45:10.700613 | orchestrator |
2025-09-19 00:45:10.700622 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-09-19 00:45:10.700632 | orchestrator | Friday 19 September 2025 00:44:08 +0000 (0:00:01.711) 0:00:05.766 ******
2025-09-19 00:45:10.700642 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-09-19 00:45:10.700659 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-09-19 00:45:10.700669 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-09-19 00:45:10.700679 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-09-19 00:45:10.700688 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-09-19 00:45:10.700698 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
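The `Persist modules via modules-load.d` task above writes one config file per kernel module so systemd-modules-load reloads it at boot (one module name per line, per the `modules-load.d` convention). A sketch of that persistence step, writing under a caller-supplied root instead of `/etc` so it can run unprivileged:

```python
from pathlib import Path

def persist_module(name: str, root: Path) -> Path:
    """Write <root>/modules-load.d/<name>.conf containing the module name,
    the one-name-per-line format systemd-modules-load reads at boot.
    `root` stands in for /etc here so the sketch needs no privileges."""
    conf = root / "modules-load.d" / f"{name}.conf"
    conf.parent.mkdir(parents=True, exist_ok=True)
    conf.write_text(name + "\n")
    return conf
```

The corresponding "Drop module persistence" task (skipped in this run) would simply remove the same file.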
2025-09-19 00:45:10.700709 | orchestrator |
2025-09-19 00:45:10.700720 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-09-19 00:45:10.700731 | orchestrator | Friday 19 September 2025 00:44:09 +0000 (0:00:01.758) 0:00:07.524 ******
2025-09-19 00:45:10.700796 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-09-19 00:45:10.700807 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:45:10.700826 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-09-19 00:45:10.700836 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:45:10.700846 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-09-19 00:45:10.700855 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-09-19 00:45:10.700865 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:45:10.700874 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-09-19 00:45:10.700896 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:10.700906 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:45:10.700916 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-09-19 00:45:10.700925 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:45:10.700934 | orchestrator |
2025-09-19 00:45:10.700944 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-09-19 00:45:10.700953 | orchestrator | Friday 19 September 2025 00:44:11 +0000 (0:00:01.236) 0:00:08.760 ******
2025-09-19 00:45:10.700963 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:45:10.700972 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:45:10.700982 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:45:10.700991 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:10.701001 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:45:10.701010 | orchestrator | skipping: [testbed-node-2]
2025-09-19
00:45:10.701019 | orchestrator | 2025-09-19 00:45:10.701029 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-09-19 00:45:10.701038 | orchestrator | Friday 19 September 2025 00:44:11 +0000 (0:00:00.627) 0:00:09.387 ****** 2025-09-19 00:45:10.701066 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701080 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701091 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701138 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701158 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701168 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 
'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701218 | orchestrator | 2025-09-19 00:45:10.701228 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-09-19 00:45:10.701238 | orchestrator | Friday 19 September 2025 00:44:13 +0000 (0:00:01.520) 0:00:10.908 ****** 2025-09-19 00:45:10.701248 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701259 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701283 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701293 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701330 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701340 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701360 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 
00:45:10.701395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701405 | orchestrator | 2025-09-19 00:45:10.701415 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-09-19 00:45:10.701425 | orchestrator | Friday 19 September 2025 00:44:15 +0000 (0:00:02.731) 0:00:13.639 ****** 2025-09-19 00:45:10.701435 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:45:10.701444 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:45:10.701454 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:45:10.701463 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:45:10.701472 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:45:10.701482 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:45:10.701496 | orchestrator | 2025-09-19 00:45:10.701506 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-09-19 00:45:10.701516 | orchestrator | Friday 19 September 2025 00:44:16 +0000 (0:00:00.769) 0:00:14.409 ****** 2025-09-19 00:45:10.701526 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701540 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701550 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 
'timeout': '30'}}}) 2025-09-19 00:45:10.701565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701576 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701586 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701601 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 00:45:10.701687 | orchestrator | 2025-09-19 00:45:10.701697 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-19 00:45:10.701707 | orchestrator | Friday 19 September 2025 00:44:19 +0000 (0:00:03.099) 0:00:17.509 ****** 2025-09-19 00:45:10.701717 | orchestrator | 2025-09-19 00:45:10.701726 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-19 00:45:10.701762 | orchestrator | Friday 19 September 2025 00:44:20 +0000 (0:00:00.338) 0:00:17.848 ****** 2025-09-19 00:45:10.701773 | orchestrator | 2025-09-19 00:45:10.701782 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-19 00:45:10.701792 | orchestrator | Friday 19 September 2025 00:44:20 +0000 (0:00:00.144) 0:00:17.993 ****** 2025-09-19 00:45:10.701801 | orchestrator | 2025-09-19 00:45:10.701815 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-19 00:45:10.701825 | orchestrator | Friday 19 September 2025 00:44:20 +0000 (0:00:00.147) 0:00:18.140 ****** 2025-09-19 00:45:10.701834 | 
orchestrator | 2025-09-19 00:45:10.701844 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-19 00:45:10.701853 | orchestrator | Friday 19 September 2025 00:44:20 +0000 (0:00:00.142) 0:00:18.283 ****** 2025-09-19 00:45:10.701863 | orchestrator | 2025-09-19 00:45:10.701872 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-19 00:45:10.701882 | orchestrator | Friday 19 September 2025 00:44:20 +0000 (0:00:00.127) 0:00:18.411 ****** 2025-09-19 00:45:10.701892 | orchestrator | 2025-09-19 00:45:10.701901 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-09-19 00:45:10.701910 | orchestrator | Friday 19 September 2025 00:44:20 +0000 (0:00:00.165) 0:00:18.576 ****** 2025-09-19 00:45:10.701920 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:45:10.701929 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:45:10.701939 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:45:10.701948 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:45:10.701957 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:45:10.701967 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:45:10.701976 | orchestrator | 2025-09-19 00:45:10.701986 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-09-19 00:45:10.701995 | orchestrator | Friday 19 September 2025 00:44:31 +0000 (0:00:11.078) 0:00:29.654 ****** 2025-09-19 00:45:10.702005 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:45:10.702079 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:45:10.702092 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:45:10.702102 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:45:10.702112 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:45:10.702121 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:45:10.702131 | orchestrator | 2025-09-19 00:45:10.702148 | 
orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-19 00:45:10.702158 | orchestrator | Friday 19 September 2025 00:44:33 +0000 (0:00:01.737) 0:00:31.392 ****** 2025-09-19 00:45:10.702173 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:45:10.702183 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:45:10.702192 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:45:10.702202 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:45:10.702211 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:45:10.702221 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:45:10.702230 | orchestrator | 2025-09-19 00:45:10.702239 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-09-19 00:45:10.702249 | orchestrator | Friday 19 September 2025 00:44:44 +0000 (0:00:10.540) 0:00:41.932 ****** 2025-09-19 00:45:10.702266 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-09-19 00:45:10.702276 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-09-19 00:45:10.702286 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-09-19 00:45:10.702295 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-09-19 00:45:10.702305 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-09-19 00:45:10.702314 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-09-19 00:45:10.702324 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-09-19 
00:45:10.702333 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-09-19 00:45:10.702342 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-09-19 00:45:10.702352 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-09-19 00:45:10.702361 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-09-19 00:45:10.702371 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-09-19 00:45:10.702380 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-19 00:45:10.702389 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-19 00:45:10.702399 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-19 00:45:10.702408 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-19 00:45:10.702417 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-19 00:45:10.702427 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-19 00:45:10.702436 | orchestrator | 2025-09-19 00:45:10.702446 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-09-19 00:45:10.702455 | orchestrator | Friday 19 September 2025 00:44:51 +0000 (0:00:07.371) 0:00:49.303 ****** 2025-09-19 00:45:10.702469 | 
orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-09-19 00:45:10.702479 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:45:10.702488 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-09-19 00:45:10.702498 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:45:10.702507 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-09-19 00:45:10.702516 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:45:10.702526 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-09-19 00:45:10.702542 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-09-19 00:45:10.702551 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-09-19 00:45:10.702560 | orchestrator | 2025-09-19 00:45:10.702570 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-09-19 00:45:10.702579 | orchestrator | Friday 19 September 2025 00:44:54 +0000 (0:00:03.054) 0:00:52.358 ****** 2025-09-19 00:45:10.702589 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-09-19 00:45:10.702598 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-09-19 00:45:10.702608 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:45:10.702617 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-09-19 00:45:10.702627 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:45:10.702636 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:45:10.702646 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-09-19 00:45:10.702655 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-09-19 00:45:10.702665 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-09-19 00:45:10.702674 | orchestrator | 2025-09-19 00:45:10.702689 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-19 
00:45:10.702705 | orchestrator | Friday 19 September 2025 00:44:58 +0000 (0:00:03.960) 0:00:56.319 ****** 2025-09-19 00:45:10.702721 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:45:10.702759 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:45:10.702778 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:45:10.702794 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:45:10.702812 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:45:10.702822 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:45:10.702831 | orchestrator | 2025-09-19 00:45:10.702841 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:45:10.702851 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 00:45:10.702868 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 00:45:10.702878 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 00:45:10.702888 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 00:45:10.702897 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 00:45:10.702907 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 00:45:10.702916 | orchestrator | 2025-09-19 00:45:10.702925 | orchestrator | 2025-09-19 00:45:10.702935 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 00:45:10.702944 | orchestrator | Friday 19 September 2025 00:45:07 +0000 (0:00:08.828) 0:01:05.147 ****** 2025-09-19 00:45:10.702954 | orchestrator | =============================================================================== 2025-09-19 00:45:10.702963 | orchestrator | 
openvswitch : Restart openvswitch-vswitchd container ------------------- 19.37s 2025-09-19 00:45:10.702973 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.08s 2025-09-19 00:45:10.702982 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.37s 2025-09-19 00:45:10.702992 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.96s 2025-09-19 00:45:10.703001 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.10s 2025-09-19 00:45:10.703010 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.05s 2025-09-19 00:45:10.703026 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.73s 2025-09-19 00:45:10.703036 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.76s 2025-09-19 00:45:10.703045 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.74s 2025-09-19 00:45:10.703054 | orchestrator | module-load : Load modules ---------------------------------------------- 1.71s 2025-09-19 00:45:10.703064 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.66s 2025-09-19 00:45:10.703073 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.52s 2025-09-19 00:45:10.703083 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.24s 2025-09-19 00:45:10.703092 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.17s 2025-09-19 00:45:10.703101 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.07s 2025-09-19 00:45:10.703110 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.82s 2025-09-19 00:45:10.703124 | orchestrator | openvswitch : 
Copying over ovs-vsctl wrapper ---------------------------- 0.77s 2025-09-19 00:45:10.703134 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.63s 2025-09-19 00:45:10.703143 | orchestrator | 2025-09-19 00:45:10 | INFO  | Task d2a96dd8-5436-4615-bed6-ed383f234c2a is in state STARTED 2025-09-19 00:45:10.703153 | orchestrator | 2025-09-19 00:45:10 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:45:10.703163 | orchestrator | 2025-09-19 00:45:10 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:45:10.703172 | orchestrator | 2025-09-19 00:45:10 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED 2025-09-19 00:45:10.703182 | orchestrator | 2025-09-19 00:45:10 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:45:10.703205 | orchestrator | 2025-09-19 00:45:10 | INFO  | Task 57487ebf-dc81-4259-b1c8-121ae29a41c6 is in state SUCCESS 2025-09-19 00:45:10.703216 | orchestrator | 2025-09-19 00:45:10 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:45:13.917673 | orchestrator | 2025-09-19 00:45:13 | INFO  | Task d2a96dd8-5436-4615-bed6-ed383f234c2a is in state STARTED 2025-09-19 00:45:13.941793 | orchestrator | 2025-09-19 00:45:13 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:45:13.942726 | orchestrator | 2025-09-19 00:45:13 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:45:13.943474 | orchestrator | 2025-09-19 00:45:13 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED 2025-09-19 00:45:13.945342 | orchestrator | 2025-09-19 00:45:13 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:45:13.945369 | orchestrator | 2025-09-19 00:45:13 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:45:16.989332 | orchestrator | 2025-09-19 00:45:16 | INFO  | Task 
d2a96dd8-5436-4615-bed6-ed383f234c2a is in state STARTED 2025-09-19 00:45:16.989420 | orchestrator | 2025-09-19 00:45:16 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:45:16.989436 | orchestrator | 2025-09-19 00:45:16 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:45:16.989448 | orchestrator | 2025-09-19 00:45:16 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED 2025-09-19 00:45:16.989458 | orchestrator | 2025-09-19 00:45:16 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:45:16.989469 | orchestrator | 2025-09-19 00:45:16 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:45:20.196232 | orchestrator | 2025-09-19 00:45:20 | INFO  | Task d2a96dd8-5436-4615-bed6-ed383f234c2a is in state STARTED 2025-09-19 00:45:20.196675 | orchestrator | 2025-09-19 00:45:20 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:45:20.197464 | orchestrator | 2025-09-19 00:45:20 | INFO  | Task b0dec748-1f54-408c-a4dd-982fb90afc4a is in state STARTED 2025-09-19 00:45:20.197965 | orchestrator | 2025-09-19 00:45:20 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED 2025-09-19 00:45:20.199761 | orchestrator | 2025-09-19 00:45:20 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:45:20.199823 | orchestrator | 2025-09-19 00:45:20 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:45:23.230822 | orchestrator | 2025-09-19 00:45:23 | INFO  | Task d2a96dd8-5436-4615-bed6-ed383f234c2a is in state STARTED 2025-09-19 00:45:23.230910 | orchestrator | 2025-09-19 00:45:23 | INFO  | Task c32475ea-705b-4351-a3ca-f2a145dfdeff is in state STARTED 2025-09-19 00:45:23.233587 | orchestrator | 2025-09-19 00:45:23 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:45:23.234751 | orchestrator | 2025-09-19 00:45:23 | INFO  | Task 
b0dec748-1f54-408c-a4dd-982fb90afc4a is in state SUCCESS 2025-09-19 00:45:23.236660 | orchestrator | 2025-09-19 00:45:23.236711 | orchestrator | 2025-09-19 00:45:23.236747 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-09-19 00:45:23.236770 | orchestrator | 2025-09-19 00:45:23.236781 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-09-19 00:45:23.236792 | orchestrator | Friday 19 September 2025 00:41:42 +0000 (0:00:00.157) 0:00:00.157 ****** 2025-09-19 00:45:23.236803 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:45:23.236815 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:45:23.236826 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:45:23.236837 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:45:23.236848 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:45:23.236858 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:45:23.236869 | orchestrator | 2025-09-19 00:45:23.236880 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-09-19 00:45:23.236898 | orchestrator | Friday 19 September 2025 00:41:43 +0000 (0:00:00.699) 0:00:00.856 ****** 2025-09-19 00:45:23.236910 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:45:23.236921 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:45:23.236931 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:45:23.236942 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:45:23.236952 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:45:23.236963 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:45:23.236974 | orchestrator | 2025-09-19 00:45:23.236984 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-09-19 00:45:23.236995 | orchestrator | Friday 19 September 2025 00:41:44 +0000 (0:00:00.603) 0:00:01.459 ****** 2025-09-19 00:45:23.237006 | orchestrator | 
skipping: [testbed-node-3] 2025-09-19 00:45:23.237017 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:45:23.237027 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:45:23.237038 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:45:23.237048 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:45:23.237059 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:45:23.237070 | orchestrator | 2025-09-19 00:45:23.237080 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-09-19 00:45:23.237091 | orchestrator | Friday 19 September 2025 00:41:44 +0000 (0:00:00.573) 0:00:02.033 ****** 2025-09-19 00:45:23.237102 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:45:23.237112 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:45:23.237123 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:45:23.237133 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:45:23.237163 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:45:23.237175 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:45:23.237185 | orchestrator | 2025-09-19 00:45:23.237196 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-09-19 00:45:23.237207 | orchestrator | Friday 19 September 2025 00:41:46 +0000 (0:00:02.159) 0:00:04.192 ****** 2025-09-19 00:45:23.237219 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:45:23.237231 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:45:23.237243 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:45:23.237255 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:45:23.237267 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:45:23.237279 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:45:23.237292 | orchestrator | 2025-09-19 00:45:23.237304 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-09-19 00:45:23.237316 | 
orchestrator | Friday 19 September 2025 00:41:47 +0000 (0:00:01.136) 0:00:05.329 ****** 2025-09-19 00:45:23.237329 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:45:23.237341 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:45:23.237353 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:45:23.237364 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:45:23.237374 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:45:23.237384 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:45:23.237395 | orchestrator | 2025-09-19 00:45:23.237405 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-09-19 00:45:23.237416 | orchestrator | Friday 19 September 2025 00:41:49 +0000 (0:00:01.630) 0:00:06.960 ****** 2025-09-19 00:45:23.237427 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:45:23.237438 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:45:23.237448 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:45:23.237459 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:45:23.237469 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:45:23.237480 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:45:23.237490 | orchestrator | 2025-09-19 00:45:23.237501 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-09-19 00:45:23.237512 | orchestrator | Friday 19 September 2025 00:41:50 +0000 (0:00:01.163) 0:00:08.123 ****** 2025-09-19 00:45:23.237522 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:45:23.237533 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:45:23.237544 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:45:23.237554 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:45:23.237565 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:45:23.237575 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:45:23.237586 | orchestrator | 2025-09-19 
00:45:23.237597 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-09-19 00:45:23.237607 | orchestrator | Friday 19 September 2025 00:41:51 +0000 (0:00:00.778) 0:00:08.902 ****** 2025-09-19 00:45:23.237618 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 00:45:23.237629 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 00:45:23.237639 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:45:23.237650 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 00:45:23.237661 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 00:45:23.237671 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:45:23.237682 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 00:45:23.237692 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 00:45:23.237703 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:45:23.237714 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 00:45:23.237753 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 00:45:23.237771 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:45:23.237782 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 00:45:23.237793 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 00:45:23.237804 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:45:23.237815 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 00:45:23.237825 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  
2025-09-19 00:45:23.237836 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:45:23.237847 | orchestrator | 2025-09-19 00:45:23.237858 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-09-19 00:45:23.237873 | orchestrator | Friday 19 September 2025 00:41:52 +0000 (0:00:00.776) 0:00:09.679 ****** 2025-09-19 00:45:23.237884 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:45:23.237895 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:45:23.237906 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:45:23.237917 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:45:23.237927 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:45:23.237938 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:45:23.237949 | orchestrator | 2025-09-19 00:45:23.237960 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-09-19 00:45:23.237971 | orchestrator | Friday 19 September 2025 00:41:53 +0000 (0:00:01.522) 0:00:11.202 ****** 2025-09-19 00:45:23.237982 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:45:23.237993 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:45:23.238003 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:45:23.238014 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:45:23.238084 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:45:23.238095 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:45:23.238106 | orchestrator | 2025-09-19 00:45:23.238237 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-09-19 00:45:23.238250 | orchestrator | Friday 19 September 2025 00:41:54 +0000 (0:00:00.925) 0:00:12.127 ****** 2025-09-19 00:45:23.238261 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:45:23.238272 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:45:23.238282 | orchestrator | changed: [testbed-node-2] 2025-09-19 
00:45:23.238293 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:45:23.238304 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:45:23.238315 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:45:23.238326 | orchestrator |
2025-09-19 00:45:23.238336 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2025-09-19 00:45:23.238347 | orchestrator | Friday 19 September 2025 00:42:00 +0000 (0:00:06.014) 0:00:18.141 ******
2025-09-19 00:45:23.238358 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:45:23.238369 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:45:23.238379 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:45:23.238390 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.238401 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:45:23.238412 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:45:23.238423 | orchestrator |
2025-09-19 00:45:23.238433 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2025-09-19 00:45:23.238444 | orchestrator | Friday 19 September 2025 00:42:02 +0000 (0:00:01.800) 0:00:19.942 ******
2025-09-19 00:45:23.238455 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:45:23.238466 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:45:23.238477 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.238488 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:45:23.238498 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:45:23.238509 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:45:23.238520 | orchestrator |
2025-09-19 00:45:23.238531 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2025-09-19 00:45:23.238551 | orchestrator | Friday 19 September 2025 00:42:04 +0000 (0:00:02.053) 0:00:21.996 ******
2025-09-19 00:45:23.238562 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:45:23.238573 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:45:23.238584 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:45:23.238594 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:45:23.238605 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:45:23.238616 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:45:23.238626 | orchestrator |
2025-09-19 00:45:23.238637 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2025-09-19 00:45:23.238648 | orchestrator | Friday 19 September 2025 00:42:05 +0000 (0:00:01.345) 0:00:23.341 ******
2025-09-19 00:45:23.238659 | orchestrator | changed: [testbed-node-3] => (item=rancher)
2025-09-19 00:45:23.238670 | orchestrator | changed: [testbed-node-1] => (item=rancher)
2025-09-19 00:45:23.238681 | orchestrator | changed: [testbed-node-0] => (item=rancher)
2025-09-19 00:45:23.238838 | orchestrator | changed: [testbed-node-5] => (item=rancher)
2025-09-19 00:45:23.238853 | orchestrator | changed: [testbed-node-2] => (item=rancher)
2025-09-19 00:45:23.238863 | orchestrator | changed: [testbed-node-4] => (item=rancher)
2025-09-19 00:45:23.238874 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s)
2025-09-19 00:45:23.238885 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s)
2025-09-19 00:45:23.238895 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s)
2025-09-19 00:45:23.238906 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s)
2025-09-19 00:45:23.238917 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s)
2025-09-19 00:45:23.238928 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s)
2025-09-19 00:45:23.238938 | orchestrator |
2025-09-19 00:45:23.238949 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2025-09-19 00:45:23.238960 | orchestrator | Friday 19 September 2025 00:42:07 +0000 (0:00:01.647) 0:00:24.989 ******
2025-09-19 00:45:23.238971 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:45:23.238982 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:45:23.238993 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:45:23.239003 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:45:23.239014 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:45:23.239025 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:45:23.239035 | orchestrator |
2025-09-19 00:45:23.239056 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2025-09-19 00:45:23.239068 | orchestrator |
2025-09-19 00:45:23.239079 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2025-09-19 00:45:23.239089 | orchestrator | Friday 19 September 2025 00:42:09 +0000 (0:00:02.028) 0:00:27.018 ******
2025-09-19 00:45:23.239100 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:45:23.239111 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:45:23.239122 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:45:23.239133 | orchestrator |
2025-09-19 00:45:23.239143 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2025-09-19 00:45:23.239154 | orchestrator | Friday 19 September 2025 00:42:10 +0000 (0:00:01.011) 0:00:28.029 ******
2025-09-19 00:45:23.239165 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:45:23.239175 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:45:23.239186 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:45:23.239197 | orchestrator |
2025-09-19 00:45:23.239214 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2025-09-19 00:45:23.239226 | orchestrator | Friday 19 September 2025 00:42:11 +0000 (0:00:01.130) 0:00:29.159 ******
2025-09-19 00:45:23.239299 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:45:23.239314 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:45:23.239401 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:45:23.239418 | orchestrator |
2025-09-19 00:45:23.239429 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2025-09-19 00:45:23.239440 | orchestrator | Friday 19 September 2025 00:42:13 +0000 (0:00:01.311) 0:00:30.471 ******
2025-09-19 00:45:23.239460 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:45:23.239471 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:45:23.239482 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:45:23.239493 | orchestrator |
2025-09-19 00:45:23.239504 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2025-09-19 00:45:23.239514 | orchestrator | Friday 19 September 2025 00:42:14 +0000 (0:00:01.286) 0:00:31.758 ******
2025-09-19 00:45:23.239525 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.239536 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:45:23.239547 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:45:23.239557 | orchestrator |
2025-09-19 00:45:23.239568 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2025-09-19 00:45:23.239579 | orchestrator | Friday 19 September 2025 00:42:15 +0000 (0:00:00.725) 0:00:32.483 ******
2025-09-19 00:45:23.239590 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:45:23.239601 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:45:23.239612 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:45:23.239622 | orchestrator |
2025-09-19 00:45:23.239633 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2025-09-19 00:45:23.239644 | orchestrator | Friday 19 September 2025 00:42:15 +0000 (0:00:00.835) 0:00:33.319 ******
2025-09-19 00:45:23.239655 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:45:23.239665 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:45:23.239676 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:45:23.239687 | orchestrator |
2025-09-19 00:45:23.239698 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2025-09-19 00:45:23.239709 | orchestrator | Friday 19 September 2025 00:42:17 +0000 (0:00:01.514) 0:00:34.833 ******
2025-09-19 00:45:23.239772 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 00:45:23.239790 | orchestrator |
2025-09-19 00:45:23.239809 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2025-09-19 00:45:23.239830 | orchestrator | Friday 19 September 2025 00:42:18 +0000 (0:00:00.719) 0:00:35.552 ******
2025-09-19 00:45:23.239851 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:45:23.239870 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:45:23.239890 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:45:23.239909 | orchestrator |
2025-09-19 00:45:23.239927 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2025-09-19 00:45:23.239945 | orchestrator | Friday 19 September 2025 00:42:20 +0000 (0:00:02.152) 0:00:37.704 ******
2025-09-19 00:45:23.239963 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:45:23.239983 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:45:23.240003 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:45:23.240022 | orchestrator |
2025-09-19 00:45:23.240045 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2025-09-19 00:45:23.240067 | orchestrator | Friday 19 September 2025 00:42:21 +0000 (0:00:00.813) 0:00:38.518 ******
2025-09-19 00:45:23.240086 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:45:23.240105 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:45:23.240123 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:45:23.240143 | orchestrator |
2025-09-19 00:45:23.240162 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2025-09-19 00:45:23.240183 | orchestrator | Friday 19 September 2025 00:42:22 +0000 (0:00:01.099) 0:00:39.618 ******
2025-09-19 00:45:23.240201 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:45:23.240219 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:45:23.240236 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:45:23.240253 | orchestrator |
2025-09-19 00:45:23.240273 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2025-09-19 00:45:23.240293 | orchestrator | Friday 19 September 2025 00:42:24 +0000 (0:00:02.177) 0:00:41.795 ******
2025-09-19 00:45:23.240311 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.240326 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:45:23.240347 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:45:23.240358 | orchestrator |
2025-09-19 00:45:23.240369 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-09-19 00:45:23.240381 | orchestrator | Friday 19 September 2025 00:42:24 +0000 (0:00:00.497) 0:00:42.293 ******
2025-09-19 00:45:23.240391 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.240401 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:45:23.240410 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:45:23.240420 | orchestrator |
2025-09-19 00:45:23.240429 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-09-19 00:45:23.240438 | orchestrator | Friday 19 September 2025 00:42:25 +0000 (0:00:00.540) 0:00:42.834 ******
2025-09-19 00:45:23.240449 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:45:23.240458 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:45:23.240468 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:45:23.240477 | orchestrator |
2025-09-19 00:45:23.240498 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2025-09-19 00:45:23.240509 | orchestrator | Friday 19 September 2025 00:42:27 +0000 (0:00:01.892) 0:00:44.726 ******
2025-09-19 00:45:23.240518 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-19 00:45:23.240529 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-19 00:45:23.240545 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-19 00:45:23.240555 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-09-19 00:45:23.240565 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-09-19 00:45:23.240574 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-09-19 00:45:23.240584 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-09-19 00:45:23.240593 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-09-19 00:45:23.240603 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-09-19 00:45:23.240612 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-09-19 00:45:23.240622 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-09-19 00:45:23.240631 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-09-19 00:45:23.240641 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-09-19 00:45:23.240650 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-09-19 00:45:23.240660 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
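The FAILED - RETRYING records come from an Ansible retry loop: after starting the transient k3s-init service, the role repeatedly checks whether every server node has registered with the new cluster, up to 20 attempts. A minimal shell sketch of such a join check, assuming `kubectl get nodes --no-headers`-style output (the sample data below is made up for illustration, not taken from this job):

```shell
# Hypothetical stand-in for the role's join check: count registered nodes
# and compare against the expected total. We parse a captured sample here
# instead of querying a live cluster.
sample='testbed-node-0   Ready      control-plane   1m   v0.0.0
testbed-node-1   Ready      control-plane   1m   v0.0.0
testbed-node-2   NotReady   control-plane   5s   v0.0.0'

ready=$(printf '%s\n' "$sample" | awk '$2 == "Ready"' | wc -l)
total=$(printf '%s\n' "$sample" | wc -l)
echo "ready=${ready} total=${total}"
if [ "$ready" -ne "$total" ]; then
    echo "not all nodes joined yet - the task would be retried"
fi
```

In this run the check succeeds after roughly 55 seconds (five retry rounds), as the timing on the next task shows.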
2025-09-19 00:45:23.240669 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:45:23.240679 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:45:23.240688 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:45:23.240698 | orchestrator |
2025-09-19 00:45:23.240707 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2025-09-19 00:45:23.240752 | orchestrator | Friday 19 September 2025 00:43:22 +0000 (0:00:55.554) 0:01:40.281 ******
2025-09-19 00:45:23.240770 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.240786 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:45:23.240801 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:45:23.240815 | orchestrator |
2025-09-19 00:45:23.240830 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2025-09-19 00:45:23.240845 | orchestrator | Friday 19 September 2025 00:43:23 +0000 (0:00:00.315) 0:01:40.596 ******
2025-09-19 00:45:23.240861 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:45:23.240876 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:45:23.240891 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:45:23.240907 | orchestrator |
2025-09-19 00:45:23.240925 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2025-09-19 00:45:23.240940 | orchestrator | Friday 19 September 2025 00:43:24 +0000 (0:00:01.345) 0:01:41.942 ******
2025-09-19 00:45:23.240958 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:45:23.240974 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:45:23.240992 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:45:23.241002 | orchestrator |
2025-09-19 00:45:23.241012 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2025-09-19 00:45:23.241021 | orchestrator | Friday 19 September 2025 00:43:25 +0000 (0:00:01.393) 0:01:43.335 ******
2025-09-19 00:45:23.241031 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:45:23.241040 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:45:23.241049 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:45:23.241058 | orchestrator |
2025-09-19 00:45:23.241068 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2025-09-19 00:45:23.241077 | orchestrator | Friday 19 September 2025 00:43:52 +0000 (0:00:26.321) 0:02:09.656 ******
2025-09-19 00:45:23.241087 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:45:23.241096 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:45:23.241105 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:45:23.241115 | orchestrator |
2025-09-19 00:45:23.241124 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2025-09-19 00:45:23.241134 | orchestrator | Friday 19 September 2025 00:43:52 +0000 (0:00:00.605) 0:02:10.262 ******
2025-09-19 00:45:23.241143 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:45:23.241152 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:45:23.241162 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:45:23.241171 | orchestrator |
2025-09-19 00:45:23.241188 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2025-09-19 00:45:23.241198 | orchestrator | Friday 19 September 2025 00:43:53 +0000 (0:00:00.733) 0:02:10.995 ******
2025-09-19 00:45:23.241208 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:45:23.241217 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:45:23.241227 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:45:23.241236 | orchestrator |
2025-09-19 00:45:23.241245 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2025-09-19 00:45:23.241255 | orchestrator | Friday 19 September 2025 00:43:54 +0000 (0:00:00.605) 0:02:11.600 ******
2025-09-19 00:45:23.241264 | orchestrator | ok: [testbed-node-0]
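The node-token tasks here (register mode, change access, read, then restore) follow a common pattern: the join token that agents need lives in /var/lib/rancher/k3s/server/node-token with restrictive permissions, so the role temporarily widens the mode, reads the file, and puts the original mode back. A shell sketch of the same sequence, run against a scratch file rather than the real token path (the token value and modes below are illustrative):

```shell
# Scratch stand-in for /var/lib/rancher/k3s/server/node-token;
# the token value is made up for illustration.
TOKEN_FILE=node-token.demo
printf 'K10demo::server:secret\n' > "$TOKEN_FILE"
chmod 0600 "$TOKEN_FILE"

orig_mode=$(stat -c '%a' "$TOKEN_FILE")   # "Register node-token file access mode"
chmod g+rx,o+rx "$TOKEN_FILE"             # "Change file access node-token"
token=$(cat "$TOKEN_FILE")                # "Read node-token from master"
chmod "$orig_mode" "$TOKEN_FILE"          # "Restore node-token file access"
echo "token=${token}"
```

The stored token is what the later worker play feeds to the agents so they can join the servers.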
2025-09-19 00:45:23.241274 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:45:23.241283 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:45:23.241293 | orchestrator |
2025-09-19 00:45:23.241302 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-09-19 00:45:23.241312 | orchestrator | Friday 19 September 2025 00:43:54 +0000 (0:00:00.582) 0:02:12.183 ******
2025-09-19 00:45:23.241321 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:45:23.241330 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:45:23.241340 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:45:23.241349 | orchestrator |
2025-09-19 00:45:23.241359 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-09-19 00:45:23.241368 | orchestrator | Friday 19 September 2025 00:43:55 +0000 (0:00:00.282) 0:02:12.466 ******
2025-09-19 00:45:23.241385 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:45:23.241395 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:45:23.241404 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:45:23.241414 | orchestrator |
2025-09-19 00:45:23.241423 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-09-19 00:45:23.241433 | orchestrator | Friday 19 September 2025 00:43:55 +0000 (0:00:00.547) 0:02:13.013 ******
2025-09-19 00:45:23.241442 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:45:23.241452 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:45:23.241461 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:45:23.241470 | orchestrator |
2025-09-19 00:45:23.241480 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-09-19 00:45:23.241489 | orchestrator | Friday 19 September 2025 00:43:56 +0000 (0:00:00.926) 0:02:13.939 ******
2025-09-19 00:45:23.241499 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:45:23.241508 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:45:23.241518 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:45:23.241527 | orchestrator |
2025-09-19 00:45:23.241537 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-09-19 00:45:23.242098 | orchestrator | Friday 19 September 2025 00:43:57 +0000 (0:00:00.946) 0:02:14.886 ******
2025-09-19 00:45:23.242119 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:45:23.242129 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:45:23.242138 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:45:23.242148 | orchestrator |
2025-09-19 00:45:23.242157 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-09-19 00:45:23.242167 | orchestrator | Friday 19 September 2025 00:43:58 +0000 (0:00:00.891) 0:02:15.777 ******
2025-09-19 00:45:23.242176 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.242186 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:45:23.242195 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:45:23.242204 | orchestrator |
2025-09-19 00:45:23.242226 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-09-19 00:45:23.242236 | orchestrator | Friday 19 September 2025 00:43:58 +0000 (0:00:00.536) 0:02:16.314 ******
2025-09-19 00:45:23.242246 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.242255 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:45:23.242264 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:45:23.242274 | orchestrator |
2025-09-19 00:45:23.242283 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-09-19 00:45:23.242293 | orchestrator | Friday 19 September 2025 00:43:59 +0000 (0:00:00.367) 0:02:16.682 ******
2025-09-19 00:45:23.242302 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:45:23.242312 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:45:23.242321 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:45:23.242331 | orchestrator |
2025-09-19 00:45:23.242340 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-09-19 00:45:23.242350 | orchestrator | Friday 19 September 2025 00:44:00 +0000 (0:00:00.779) 0:02:17.461 ******
2025-09-19 00:45:23.242359 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:45:23.242369 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:45:23.242378 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:45:23.242387 | orchestrator |
2025-09-19 00:45:23.242397 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-09-19 00:45:23.242407 | orchestrator | Friday 19 September 2025 00:44:00 +0000 (0:00:00.740) 0:02:18.201 ******
2025-09-19 00:45:23.242417 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-19 00:45:23.242426 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-19 00:45:23.242435 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-19 00:45:23.242445 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-19 00:45:23.242462 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-19 00:45:23.242472 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-19 00:45:23.242482 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-19 00:45:23.242491 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-19 00:45:23.242501 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-19 00:45:23.242518 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-09-19 00:45:23.242528 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-19 00:45:23.242538 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-19 00:45:23.242547 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2025-09-19 00:45:23.242557 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-19 00:45:23.242567 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-19 00:45:23.242576 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-19 00:45:23.242585 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-19 00:45:23.242595 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-19 00:45:23.242605 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-19 00:45:23.242614 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-19 00:45:23.242623 | orchestrator |
2025-09-19 00:45:23.242633 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2025-09-19 00:45:23.242642 | orchestrator |
2025-09-19 00:45:23.242652 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2025-09-19 00:45:23.242661 | orchestrator | Friday 19 September 2025 00:44:03 +0000 (0:00:03.101) 0:02:21.303 ******
2025-09-19 00:45:23.242671 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:45:23.242681 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:45:23.242690 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:45:23.242700 | orchestrator |
2025-09-19 00:45:23.242709 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2025-09-19 00:45:23.242733 | orchestrator | Friday 19 September 2025 00:44:04 +0000 (0:00:00.297) 0:02:21.600 ******
2025-09-19 00:45:23.242743 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:45:23.242752 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:45:23.242762 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:45:23.242771 | orchestrator |
2025-09-19 00:45:23.242780 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2025-09-19 00:45:23.242790 | orchestrator | Friday 19 September 2025 00:44:04 +0000 (0:00:00.623) 0:02:22.224 ******
2025-09-19 00:45:23.242799 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:45:23.242809 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:45:23.242818 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:45:23.242828 | orchestrator |
2025-09-19 00:45:23.242838 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2025-09-19 00:45:23.242847 | orchestrator | Friday 19 September 2025 00:44:05 +0000 (0:00:00.299) 0:02:22.524 ******
2025-09-19 00:45:23.242861 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 00:45:23.242871 | orchestrator |
2025-09-19 00:45:23.242881 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2025-09-19 00:45:23.242896 | orchestrator | Friday 19 September 2025 00:44:05 +0000 (0:00:00.508) 0:02:23.033 ******
2025-09-19 00:45:23.242906 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:45:23.242915 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:45:23.242925 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:45:23.242934 | orchestrator |
2025-09-19 00:45:23.242944 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2025-09-19 00:45:23.242953 | orchestrator | Friday 19 September 2025 00:44:05 +0000 (0:00:00.253) 0:02:23.286 ******
2025-09-19 00:45:23.242963 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:45:23.242972 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:45:23.242982 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:45:23.242991 | orchestrator |
2025-09-19 00:45:23.243001 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2025-09-19 00:45:23.243010 | orchestrator | Friday 19 September 2025 00:44:06 +0000 (0:00:00.293) 0:02:23.580 ******
2025-09-19 00:45:23.243020 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:45:23.243029 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:45:23.243039 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:45:23.243048 | orchestrator |
2025-09-19 00:45:23.243057 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2025-09-19 00:45:23.243067 | orchestrator | Friday 19 September 2025 00:44:06 +0000 (0:00:00.552) 0:02:24.132 ******
2025-09-19 00:45:23.243077 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:45:23.243086 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:45:23.243096 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:45:23.243105 | orchestrator |
2025-09-19 00:45:23.243114 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2025-09-19 00:45:23.243124 | orchestrator | Friday 19 September 2025 00:44:07 +0000 (0:00:00.638) 0:02:24.771 ******
2025-09-19 00:45:23.243133 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:45:23.243143 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:45:23.243152 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:45:23.243162 | orchestrator |
2025-09-19 00:45:23.243171 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2025-09-19 00:45:23.243181 | orchestrator | Friday 19 September 2025 00:44:08 +0000 (0:00:01.159) 0:02:25.930 ******
2025-09-19 00:45:23.243190 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:45:23.243199 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:45:23.243209 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:45:23.243218 | orchestrator |
2025-09-19 00:45:23.243228 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2025-09-19 00:45:23.243237 | orchestrator | Friday 19 September 2025 00:44:10 +0000 (0:00:01.474) 0:02:27.405 ******
2025-09-19 00:45:23.243247 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:45:23.243257 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:45:23.243266 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:45:23.243275 | orchestrator |
2025-09-19 00:45:23.243290 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-09-19 00:45:23.243300 | orchestrator |
2025-09-19 00:45:23.243309 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-09-19 00:45:23.243319 | orchestrator | Friday 19 September 2025 00:44:22 +0000 (0:00:12.155) 0:02:39.561 ******
2025-09-19 00:45:23.243328 | orchestrator | ok: [testbed-manager]
2025-09-19 00:45:23.243338 | orchestrator |
2025-09-19 00:45:23.243348 | orchestrator | TASK [Create .kube directory] **************************************************
2025-09-19 00:45:23.243357 | orchestrator | Friday 19 September 2025 00:44:23 +0000 (0:00:01.414) 0:02:40.975 ******
2025-09-19 00:45:23.243367 | orchestrator | changed: [testbed-manager]
2025-09-19 00:45:23.243376 | orchestrator |
2025-09-19 00:45:23.243386 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-09-19 00:45:23.243395 | orchestrator | Friday 19 September 2025 00:44:24 +0000 (0:00:00.404) 0:02:41.380 ******
2025-09-19 00:45:23.243405 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-09-19 00:45:23.243420 | orchestrator |
2025-09-19 00:45:23.243429 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-09-19 00:45:23.243439 | orchestrator | Friday 19 September 2025 00:44:24 +0000 (0:00:00.578) 0:02:41.958 ******
2025-09-19 00:45:23.243448 | orchestrator | changed: [testbed-manager]
2025-09-19 00:45:23.243458 | orchestrator |
2025-09-19 00:45:23.243467 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-09-19 00:45:23.243476 | orchestrator | Friday 19 September 2025 00:44:25 +0000 (0:00:00.683) 0:02:42.641 ******
2025-09-19 00:45:23.243486 | orchestrator | changed: [testbed-manager]
2025-09-19 00:45:23.243495 | orchestrator |
2025-09-19 00:45:23.243505 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-09-19 00:45:23.243515 | orchestrator | Friday 19 September 2025 00:44:26 +0000 (0:00:00.773) 0:02:43.415 ******
2025-09-19 00:45:23.243524 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-19 00:45:23.243533 | orchestrator |
2025-09-19 00:45:23.243543 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-09-19 00:45:23.243552 | orchestrator | Friday 19 September 2025 00:44:27 +0000 (0:00:01.320) 0:02:44.736 ******
2025-09-19 00:45:23.243562 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-19 00:45:23.243571 | orchestrator |
2025-09-19 00:45:23.243581 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-09-19 00:45:23.243590 | orchestrator | Friday 19 September 2025 00:44:28 +0000 (0:00:00.371) 0:02:45.410 ******
2025-09-19 00:45:23.243600 | orchestrator | changed: [testbed-manager]
2025-09-19 00:45:23.243609 | orchestrator |
2025-09-19 00:45:23.243619 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-09-19 00:45:23.243628 | orchestrator | Friday 19 September 2025 00:44:28 +0000 (0:00:00.371) 0:02:45.782 ******
2025-09-19 00:45:23.243638 | orchestrator | changed: [testbed-manager]
2025-09-19 00:45:23.243648 | orchestrator |
2025-09-19 00:45:23.243657 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2025-09-19 00:45:23.243667 | orchestrator |
2025-09-19 00:45:23.243677 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2025-09-19 00:45:23.243690 | orchestrator | Friday 19 September 2025 00:44:28 +0000 (0:00:00.395) 0:02:46.177 ******
2025-09-19 00:45:23.243700 | orchestrator | ok: [testbed-manager]
2025-09-19 00:45:23.243709 | orchestrator |
2025-09-19 00:45:23.243771 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2025-09-19 00:45:23.243783 | orchestrator | Friday 19 September 2025 00:44:28 +0000 (0:00:00.163) 0:02:46.341 ******
2025-09-19 00:45:23.243792 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2025-09-19 00:45:23.243802 | orchestrator |
2025-09-19 00:45:23.243811 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2025-09-19 00:45:23.243821 | orchestrator | Friday 19 September 2025 00:44:29 +0000 (0:00:00.226) 0:02:46.567 ******
2025-09-19 00:45:23.243830 | orchestrator | ok: [testbed-manager]
2025-09-19 00:45:23.243840 | orchestrator |
2025-09-19 00:45:23.243849 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
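The kubectl tasks in this play follow the usual Debian-family flow for the community-owned pkgs.k8s.io repository: fetch the signing key into a keyring file, add a signed-by apt source, then install the package. A sketch of the equivalent manual steps; the minor version and keyring path below are illustrative assumptions, not values read from this job:

```shell
# Build the apt source line such a role would install (values illustrative).
K8S_MINOR=v1.31
KEYRING=/etc/apt/keyrings/kubernetes-apt-keyring.gpg
repo_line="deb [signed-by=${KEYRING}] https://pkgs.k8s.io/core:/stable:/${K8S_MINOR}/deb/ /"
echo "$repo_line"

# The privileged steps themselves (shown as comments, not executed here):
#   curl -fsSL "https://pkgs.k8s.io/core:/stable:/${K8S_MINOR}/deb/Release.key" \
#     | gpg --dearmor -o "$KEYRING" && chmod 0644 "$KEYRING"   # add key, set permissions
#   apt-get update && apt-get install -y apt-transport-https kubectl
```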
2025-09-19 00:45:23.243859 | orchestrator | Friday 19 September 2025 00:44:30 +0000 (0:00:00.833) 0:02:47.401 ****** 2025-09-19 00:45:23.243868 | orchestrator | ok: [testbed-manager] 2025-09-19 00:45:23.243878 | orchestrator | 2025-09-19 00:45:23.243887 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-09-19 00:45:23.243897 | orchestrator | Friday 19 September 2025 00:44:31 +0000 (0:00:01.706) 0:02:49.107 ****** 2025-09-19 00:45:23.243906 | orchestrator | changed: [testbed-manager] 2025-09-19 00:45:23.243916 | orchestrator | 2025-09-19 00:45:23.243925 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-09-19 00:45:23.243934 | orchestrator | Friday 19 September 2025 00:44:32 +0000 (0:00:00.936) 0:02:50.043 ****** 2025-09-19 00:45:23.243944 | orchestrator | ok: [testbed-manager] 2025-09-19 00:45:23.243953 | orchestrator | 2025-09-19 00:45:23.243963 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-09-19 00:45:23.243982 | orchestrator | Friday 19 September 2025 00:44:33 +0000 (0:00:00.513) 0:02:50.557 ****** 2025-09-19 00:45:23.243991 | orchestrator | changed: [testbed-manager] 2025-09-19 00:45:23.244001 | orchestrator | 2025-09-19 00:45:23.244010 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-09-19 00:45:23.244034 | orchestrator | Friday 19 September 2025 00:44:39 +0000 (0:00:06.579) 0:02:57.136 ****** 2025-09-19 00:45:23.244042 | orchestrator | changed: [testbed-manager] 2025-09-19 00:45:23.244049 | orchestrator | 2025-09-19 00:45:23.244057 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-09-19 00:45:23.244065 | orchestrator | Friday 19 September 2025 00:44:51 +0000 (0:00:11.598) 0:03:08.735 ****** 2025-09-19 00:45:23.244073 | orchestrator | ok: [testbed-manager] 2025-09-19 00:45:23.244081 | orchestrator 
| 2025-09-19 00:45:23.244088 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-09-19 00:45:23.244096 | orchestrator | 2025-09-19 00:45:23.244104 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-09-19 00:45:23.244117 | orchestrator | Friday 19 September 2025 00:44:51 +0000 (0:00:00.461) 0:03:09.196 ****** 2025-09-19 00:45:23.244125 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:45:23.244133 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:45:23.244141 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:45:23.244148 | orchestrator | 2025-09-19 00:45:23.244156 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-09-19 00:45:23.244164 | orchestrator | Friday 19 September 2025 00:44:52 +0000 (0:00:00.276) 0:03:09.472 ****** 2025-09-19 00:45:23.244172 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:45:23.244180 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:45:23.244187 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:45:23.244195 | orchestrator | 2025-09-19 00:45:23.244203 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-09-19 00:45:23.244211 | orchestrator | Friday 19 September 2025 00:44:52 +0000 (0:00:00.503) 0:03:09.976 ****** 2025-09-19 00:45:23.244219 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:45:23.244226 | orchestrator | 2025-09-19 00:45:23.244234 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-09-19 00:45:23.244242 | orchestrator | Friday 19 September 2025 00:44:53 +0000 (0:00:00.525) 0:03:10.501 ****** 2025-09-19 00:45:23.244250 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:45:23.244257 | orchestrator | 2025-09-19 00:45:23.244265 | 
orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] ********************** 2025-09-19 00:45:23.244273 | orchestrator | Friday 19 September 2025 00:44:53 +0000 (0:00:00.184) 0:03:10.686 ****** 2025-09-19 00:45:23.244281 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:45:23.244289 | orchestrator | 2025-09-19 00:45:23.244296 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ******** 2025-09-19 00:45:23.244304 | orchestrator | Friday 19 September 2025 00:44:53 +0000 (0:00:00.184) 0:03:10.871 ****** 2025-09-19 00:45:23.244312 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:45:23.244319 | orchestrator | 2025-09-19 00:45:23.244327 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] ************* 2025-09-19 00:45:23.244335 | orchestrator | Friday 19 September 2025 00:44:53 +0000 (0:00:00.177) 0:03:11.048 ****** 2025-09-19 00:45:23.244343 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:45:23.244350 | orchestrator | 2025-09-19 00:45:23.244358 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] ************** 2025-09-19 00:45:23.244366 | orchestrator | Friday 19 September 2025 00:44:54 +0000 (0:00:00.500) 0:03:11.549 ****** 2025-09-19 00:45:23.244373 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:45:23.244381 | orchestrator | 2025-09-19 00:45:23.244389 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] ********************** 2025-09-19 00:45:23.244397 | orchestrator | Friday 19 September 2025 00:44:54 +0000 (0:00:00.300) 0:03:11.849 ****** 2025-09-19 00:45:23.244410 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:45:23.244418 | orchestrator | 2025-09-19 00:45:23.244425 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ****************** 2025-09-19 00:45:23.244433 | orchestrator | Friday 19 September 2025 00:44:54 +0000 (0:00:00.205) 0:03:12.055 ****** 
2025-09-19 00:45:23.244441 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.244449 | orchestrator |
2025-09-19 00:45:23.244461 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] ***
2025-09-19 00:45:23.244469 | orchestrator | Friday 19 September 2025 00:44:55 +0000 (0:00:00.260) 0:03:12.372 ******
2025-09-19 00:45:23.244477 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.244484 | orchestrator |
2025-09-19 00:45:23.244492 | orchestrator | TASK [k3s_server_post : Set architecture variable] *****************************
2025-09-19 00:45:23.244500 | orchestrator | Friday 19 September 2025 00:44:55 +0000 (0:00:00.281) 0:03:12.632 ******
2025-09-19 00:45:23.244508 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.244516 | orchestrator |
2025-09-19 00:45:23.244524 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] **********************
2025-09-19 00:45:23.244532 | orchestrator | Friday 19 September 2025 00:44:55 +0000 (0:00:00.299) 0:03:12.913 ******
2025-09-19 00:45:23.244540 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)
2025-09-19 00:45:23.244547 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)
2025-09-19 00:45:23.244555 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.244563 | orchestrator |
2025-09-19 00:45:23.244571 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] *************************
2025-09-19 00:45:23.244579 | orchestrator | Friday 19 September 2025 00:44:55 +0000 (0:00:00.199) 0:03:13.213 ******
2025-09-19 00:45:23.244587 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.244594 | orchestrator |
2025-09-19 00:45:23.244602 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ******************
2025-09-19 00:45:23.244610 | orchestrator | Friday 19 September 2025 00:44:56 +0000 (0:00:00.233) 0:03:13.412 ******
2025-09-19 00:45:23.244618 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.244626 | orchestrator |
2025-09-19 00:45:23.244633 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] ***********
2025-09-19 00:45:23.244641 | orchestrator | Friday 19 September 2025 00:44:56 +0000 (0:00:00.194) 0:03:13.646 ******
2025-09-19 00:45:23.244649 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.244657 | orchestrator |
2025-09-19 00:45:23.244665 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2025-09-19 00:45:23.244672 | orchestrator | Friday 19 September 2025 00:44:56 +0000 (0:00:00.194) 0:03:13.850 ******
2025-09-19 00:45:23.244680 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.244688 | orchestrator |
2025-09-19 00:45:23.244695 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2025-09-19 00:45:23.244703 | orchestrator | Friday 19 September 2025 00:44:56 +0000 (0:00:00.194) 0:03:14.045 ******
2025-09-19 00:45:23.244711 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.244731 | orchestrator |
2025-09-19 00:45:23.244739 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2025-09-19 00:45:23.244747 | orchestrator | Friday 19 September 2025 00:44:56 +0000 (0:00:00.610) 0:03:14.240 ******
2025-09-19 00:45:23.244755 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.244763 | orchestrator |
2025-09-19 00:45:23.244771 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2025-09-19 00:45:23.244783 | orchestrator | Friday 19 September 2025 00:44:57 +0000 (0:00:00.218) 0:03:14.850 ******
2025-09-19 00:45:23.244791 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.244799 | orchestrator |
2025-09-19 00:45:23.244807 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2025-09-19 00:45:23.244815 | orchestrator | Friday 19 September 2025 00:44:57 +0000 (0:00:00.189) 0:03:15.068 ******
2025-09-19 00:45:23.244823 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.244835 | orchestrator |
2025-09-19 00:45:23.244843 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2025-09-19 00:45:23.244851 | orchestrator | Friday 19 September 2025 00:44:57 +0000 (0:00:00.190) 0:03:15.258 ******
2025-09-19 00:45:23.244859 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.244866 | orchestrator |
2025-09-19 00:45:23.244874 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2025-09-19 00:45:23.244882 | orchestrator | Friday 19 September 2025 00:44:58 +0000 (0:00:00.189) 0:03:15.448 ******
2025-09-19 00:45:23.244890 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.244897 | orchestrator |
2025-09-19 00:45:23.244905 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2025-09-19 00:45:23.244913 | orchestrator | Friday 19 September 2025 00:44:58 +0000 (0:00:00.177) 0:03:15.638 ******
2025-09-19 00:45:23.244921 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.244928 | orchestrator |
2025-09-19 00:45:23.244936 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2025-09-19 00:45:23.244944 | orchestrator | Friday 19 September 2025 00:44:58 +0000 (0:00:00.416) 0:03:15.816 ******
2025-09-19 00:45:23.244952 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)
2025-09-19 00:45:23.244970 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)
2025-09-19 00:45:23.244979 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)
2025-09-19 00:45:23.244987 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)
2025-09-19 00:45:23.244995 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.245003 | orchestrator |
2025-09-19 00:45:23.245011 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2025-09-19 00:45:23.245018 | orchestrator | Friday 19 September 2025 00:44:58 +0000 (0:00:00.312) 0:03:16.232 ******
2025-09-19 00:45:23.245026 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.245034 | orchestrator |
2025-09-19 00:45:23.245042 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2025-09-19 00:45:23.245050 | orchestrator | Friday 19 September 2025 00:44:59 +0000 (0:00:00.213) 0:03:16.545 ******
2025-09-19 00:45:23.245057 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.245065 | orchestrator |
2025-09-19 00:45:23.245073 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2025-09-19 00:45:23.245081 | orchestrator | Friday 19 September 2025 00:44:59 +0000 (0:00:00.249) 0:03:16.759 ******
2025-09-19 00:45:23.245089 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.245097 | orchestrator |
2025-09-19 00:45:23.245104 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2025-09-19 00:45:23.245115 | orchestrator | Friday 19 September 2025 00:44:59 +0000 (0:00:00.235) 0:03:17.008 ******
2025-09-19 00:45:23.245123 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.245131 | orchestrator |
2025-09-19 00:45:23.245139 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2025-09-19 00:45:23.245147 | orchestrator | Friday 19 September 2025 00:44:59 +0000 (0:00:00.570) 0:03:17.244 ******
2025-09-19 00:45:23.245155 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2025-09-19 00:45:23.245163 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2025-09-19 00:45:23.245171 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.245178 | orchestrator |
2025-09-19 00:45:23.245186 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2025-09-19 00:45:23.245194 | orchestrator | Friday 19 September 2025 00:45:00 +0000 (0:00:00.508) 0:03:17.815 ******
2025-09-19 00:45:23.245202 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.245210 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:45:23.245217 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:45:23.245225 | orchestrator |
2025-09-19 00:45:23.245233 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2025-09-19 00:45:23.245245 | orchestrator | Friday 19 September 2025 00:45:00 +0000 (0:00:00.854) 0:03:18.323 ******
2025-09-19 00:45:23.245253 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:45:23.245261 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:45:23.245269 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:45:23.245277 | orchestrator |
2025-09-19 00:45:23.245285 | orchestrator | PLAY [Apply role k9s] **********************************************************
2025-09-19 00:45:23.245292 | orchestrator |
2025-09-19 00:45:23.245300 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2025-09-19 00:45:23.245308 | orchestrator | Friday 19 September 2025 00:45:01 +0000 (0:00:00.098) 0:03:19.178 ******
2025-09-19 00:45:23.245316 | orchestrator | ok: [testbed-manager]
2025-09-19 00:45:23.245324 | orchestrator |
2025-09-19 00:45:23.245331 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2025-09-19 00:45:23.245339 | orchestrator | Friday 19 September 2025 00:45:01 +0000 (0:00:00.322) 0:03:19.276 ******
2025-09-19 00:45:23.245347 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2025-09-19 00:45:23.245355 | orchestrator |
2025-09-19 00:45:23.245363 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2025-09-19 00:45:23.245370 | orchestrator | Friday 19 September 2025 00:45:02 +0000 (0:00:04.841) 0:03:19.598 ******
2025-09-19 00:45:23.245378 | orchestrator | changed: [testbed-manager]
2025-09-19 00:45:23.245386 | orchestrator |
2025-09-19 00:45:23.245394 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2025-09-19 00:45:23.245402 | orchestrator |
2025-09-19 00:45:23.245410 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2025-09-19 00:45:23.245422 | orchestrator | Friday 19 September 2025 00:45:07 +0000 (0:00:00.593) 0:03:24.440 ******
2025-09-19 00:45:23.245430 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:45:23.245438 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:45:23.245445 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:45:23.245453 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:45:23.245461 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:45:23.245469 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:45:23.245477 | orchestrator |
2025-09-19 00:45:23.245485 | orchestrator | TASK [Manage labels] ***********************************************************
2025-09-19 00:45:23.245492 | orchestrator | Friday 19 September 2025 00:45:07 +0000 (0:00:12.123) 0:03:25.033 ******
2025-09-19 00:45:23.245500 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-19 00:45:23.245508 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-19 00:45:23.245516 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-19 00:45:23.245523 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-19 00:45:23.245531 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-19 00:45:23.245539 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-19 00:45:23.245546 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-19 00:45:23.245554 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-19 00:45:23.245562 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-19 00:45:23.245579 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2025-09-19 00:45:23.245587 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2025-09-19 00:45:23.245595 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2025-09-19 00:45:23.245602 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-09-19 00:45:23.245610 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-09-19 00:45:23.245622 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-09-19 00:45:23.245630 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-09-19 00:45:23.245638 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-09-19 00:45:23.245646 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-09-19 00:45:23.245654 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-09-19 00:45:23.245665 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-09-19 00:45:23.245673 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-09-19 00:45:23.245681 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-19 00:45:23.245689 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-19 00:45:23.245696 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-19 00:45:23.245704 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-19 00:45:23.245712 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-19 00:45:23.245732 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-19 00:45:23.245741 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-19 00:45:23.245748 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-19 00:45:23.245756 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-19 00:45:23.245764 | orchestrator |
2025-09-19 00:45:23.245771 | orchestrator | TASK [Manage annotations] ******************************************************
2025-09-19 00:45:23.245779 | orchestrator | Friday 19 September 2025 00:45:19 +0000 (0:00:12.123) 0:03:37.157 ******
2025-09-19 00:45:23.245787 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:45:23.245795 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:45:23.245802 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:45:23.245810 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.245818 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:45:23.245825 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:45:23.245842 | orchestrator |
2025-09-19 00:45:23.245850 | orchestrator | TASK [Manage taints] ***********************************************************
2025-09-19 00:45:23.245858 | orchestrator | Friday 19 September 2025 00:45:20 +0000 (0:00:00.526) 0:03:37.683 ******
2025-09-19 00:45:23.245865 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:45:23.245873 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:45:23.245881 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:45:23.245888 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:45:23.245896 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:45:23.245904 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:45:23.245911 | orchestrator |
2025-09-19 00:45:23.245919 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 00:45:23.245932 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:45:23.245941 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0
2025-09-19 00:45:23.245949 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-19 00:45:23.245956 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-19 00:45:23.245969 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-19 00:45:23.245977 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-19 00:45:23.245984 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-19 00:45:23.245992 | orchestrator |
2025-09-19 00:45:23.246000 | orchestrator |
2025-09-19 00:45:23.246008 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 00:45:23.246034 | orchestrator | Friday 19 September 2025 00:45:20 +0000 (0:00:00.533) 0:03:38.217 ******
2025-09-19 00:45:23.246044 | orchestrator | ===============================================================================
2025-09-19 00:45:23.246052 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.55s
2025-09-19 00:45:23.246060 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.32s
2025-09-19 00:45:23.246067 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.16s
2025-09-19 00:45:23.246075 | orchestrator | Manage labels ---------------------------------------------------------- 12.12s
2025-09-19 00:45:23.246083 | orchestrator | kubectl : Install required packages ------------------------------------ 11.60s
2025-09-19 00:45:23.246091 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.58s
2025-09-19 00:45:23.246098 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.01s
2025-09-19 00:45:23.246106 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.84s
2025-09-19 00:45:23.246114 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.10s
2025-09-19 00:45:23.246122 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.18s
2025-09-19 00:45:23.246133 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.16s
2025-09-19 00:45:23.246141 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.15s
2025-09-19 00:45:23.246149 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.05s
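The "Manage labels" task above (12.12s in the recap) applies per-node role labels such as `node-role.osism.tech/control-plane=true` and `openstack-control-plane=enabled`. A minimal sketch of the equivalent idempotent labeling, assuming the `kubectl label node … --overwrite` command form; the helper `label_commands` is hypothetical and the real role may talk to the Kubernetes API directly instead of shelling out:

```python
# Sketch: build one idempotent `kubectl label` invocation per label,
# mirroring the label assignments seen in the "Manage labels" task.
# The command shape is an assumption, not taken from the playbook.

def label_commands(node: str, labels: dict[str, str]) -> list[str]:
    """Return one `kubectl label` command per key=value pair for a node."""
    return [
        f"kubectl label node {node} {key}={value} --overwrite"
        for key, value in labels.items()
    ]

# Labels applied to the control-plane nodes in the log above.
control_plane_labels = {
    "node-role.osism.tech/control-plane": "true",
    "openstack-control-plane": "enabled",
    "node-role.osism.tech/network-plane": "true",
}

cmds = label_commands("testbed-node-0", control_plane_labels)
for cmd in cmds:
    print(cmd)
```

`--overwrite` makes the call safe to repeat, which matches the `ok:` (unchanged) results reported for every label item in the log.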
2025-09-19 00:45:23.246156 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 2.03s
2025-09-19 00:45:23.246164 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.89s
2025-09-19 00:45:23.246172 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 1.80s
2025-09-19 00:45:23.246179 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.71s
2025-09-19 00:45:23.246187 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 1.65s
2025-09-19 00:45:23.246195 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 1.63s
2025-09-19 00:45:23.246203 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 1.52s
2025-09-19 00:45:23.246210 | orchestrator | 2025-09-19 00:45:23 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED
2025-09-19 00:45:23.246218 | orchestrator | 2025-09-19 00:45:23 | INFO  | Task a50cf1a5-718a-41bb-82a9-d456dfdd6c11 is in state STARTED
2025-09-19 00:45:23.246226 | orchestrator | 2025-09-19 00:45:23 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:45:23.246234 | orchestrator | 2025-09-19 00:45:23 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:45:26.269602 | orchestrator | 2025-09-19 00:45:26 | INFO  | Task d2a96dd8-5436-4615-bed6-ed383f234c2a is in state STARTED
2025-09-19 00:45:26.269776 | orchestrator | 2025-09-19 00:45:26 | INFO  | Task c32475ea-705b-4351-a3ca-f2a145dfdeff is in state STARTED
2025-09-19 00:45:26.271670 | orchestrator | 2025-09-19 00:45:26 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED
2025-09-19 00:45:26.273400 | orchestrator | 2025-09-19 00:45:26 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED
2025-09-19 00:45:26.277109 | orchestrator | 2025-09-19 00:45:26 | INFO  | Task a50cf1a5-718a-41bb-82a9-d456dfdd6c11 is in state STARTED
2025-09-19 00:45:26.277170 | orchestrator | 2025-09-19 00:45:26 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:45:26.277210 | orchestrator | 2025-09-19 00:45:26 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:45:29.314078 | orchestrator | 2025-09-19 00:45:29 | INFO  | Task d2a96dd8-5436-4615-bed6-ed383f234c2a is in state STARTED
2025-09-19 00:45:29.314279 | orchestrator | 2025-09-19 00:45:29 | INFO  | Task c32475ea-705b-4351-a3ca-f2a145dfdeff is in state STARTED
2025-09-19 00:45:29.315032 | orchestrator | 2025-09-19 00:45:29 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED
2025-09-19 00:45:29.317630 | orchestrator | 2025-09-19 00:45:29 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED
2025-09-19 00:45:29.317979 | orchestrator | 2025-09-19 00:45:29 | INFO  | Task a50cf1a5-718a-41bb-82a9-d456dfdd6c11 is in state SUCCESS
2025-09-19 00:45:29.318649 | orchestrator | 2025-09-19 00:45:29 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:45:29.318685 | orchestrator | 2025-09-19 00:45:29 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:45:32.344822 | orchestrator | 2025-09-19 00:45:32 | INFO  | Task d2a96dd8-5436-4615-bed6-ed383f234c2a is in state STARTED
2025-09-19 00:45:32.346196 | orchestrator | 2025-09-19 00:45:32 | INFO  | Task c32475ea-705b-4351-a3ca-f2a145dfdeff is in state SUCCESS
2025-09-19 00:45:32.346597 | orchestrator | 2025-09-19 00:45:32 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED
2025-09-19 00:45:32.347143 | orchestrator | 2025-09-19 00:45:32 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED
2025-09-19 00:45:32.347800 | orchestrator | 2025-09-19 00:45:32 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:45:32.347826 | orchestrator | 2025-09-19 00:45:32 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:45:35.395100 | orchestrator | 2025-09-19 00:45:35 | INFO  | Task d2a96dd8-5436-4615-bed6-ed383f234c2a is in state STARTED
2025-09-19 00:45:35.397001 | orchestrator | 2025-09-19 00:45:35 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED
2025-09-19 00:45:35.398726 | orchestrator | 2025-09-19 00:45:35 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED
2025-09-19 00:45:35.400428 | orchestrator | 2025-09-19 00:45:35 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:45:35.400509 | orchestrator | 2025-09-19 00:45:35 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:45:38.433543 | orchestrator | 2025-09-19 00:45:38 | INFO  | Task d2a96dd8-5436-4615-bed6-ed383f234c2a is in state STARTED
2025-09-19 00:45:38.433643 | orchestrator | 2025-09-19 00:45:38 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED
2025-09-19 00:45:38.434696 | orchestrator | 2025-09-19 00:45:38 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED
2025-09-19 00:45:38.436810 | orchestrator | 2025-09-19 00:45:38 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:45:38.436840 | orchestrator | 2025-09-19 00:45:38 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:45:41.481636 | orchestrator | 2025-09-19 00:45:41 | INFO  | Task d2a96dd8-5436-4615-bed6-ed383f234c2a is in state STARTED
2025-09-19 00:45:41.481840 | orchestrator | 2025-09-19 00:45:41 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED
2025-09-19 00:45:41.482823 | orchestrator | 2025-09-19 00:45:41 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED
2025-09-19 00:45:41.483991 | orchestrator | 2025-09-19 00:45:41 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:45:41.484019 | orchestrator | 2025-09-19 00:45:41 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:45:44.535535 | orchestrator | 2025-09-19 00:45:44 | INFO  | Task d2a96dd8-5436-4615-bed6-ed383f234c2a is in state STARTED
2025-09-19 00:45:44.539968 | orchestrator | 2025-09-19 00:45:44 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED
2025-09-19 00:45:44.542608 | orchestrator | 2025-09-19 00:45:44 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED
2025-09-19 00:45:44.545259 | orchestrator | 2025-09-19 00:45:44 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:45:44.545713 | orchestrator | 2025-09-19 00:45:44 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:45:47.590203 | orchestrator | 2025-09-19 00:45:47 | INFO  | Task d2a96dd8-5436-4615-bed6-ed383f234c2a is in state STARTED
2025-09-19 00:45:47.591880 | orchestrator | 2025-09-19 00:45:47 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED
2025-09-19 00:45:47.593949 | orchestrator | 2025-09-19 00:45:47 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED
2025-09-19 00:45:47.595124 | orchestrator | 2025-09-19 00:45:47 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:45:47.595288 | orchestrator | 2025-09-19 00:45:47 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:45:50.641371 | orchestrator | 2025-09-19 00:45:50 | INFO  | Task d2a96dd8-5436-4615-bed6-ed383f234c2a is in state STARTED
2025-09-19 00:45:50.643202 | orchestrator | 2025-09-19 00:45:50 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED
2025-09-19 00:45:50.644512 | orchestrator | 2025-09-19 00:45:50 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED
2025-09-19 00:45:50.646360 | orchestrator | 2025-09-19 00:45:50 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:45:50.646385 | orchestrator | 2025-09-19 00:45:50 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:46:39.499716 | orchestrator | 2025-09-19 00:46:39 | INFO  | Task
d2a96dd8-5436-4615-bed6-ed383f234c2a is in state STARTED
2025-09-19 00:46:39.500450 | orchestrator | 2025-09-19 00:46:39 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED
2025-09-19 00:46:39.501998 | orchestrator | 2025-09-19 00:46:39 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED
2025-09-19 00:46:39.503930 | orchestrator | 2025-09-19 00:46:39 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:46:39.503982 | orchestrator | 2025-09-19 00:46:39 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:46:42.540177 | orchestrator | 2025-09-19 00:46:42 | INFO  | Task d2a96dd8-5436-4615-bed6-ed383f234c2a is in state STARTED
2025-09-19 00:46:42.542177 | orchestrator | 2025-09-19 00:46:42 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED
2025-09-19 00:46:42.543759 | orchestrator | 2025-09-19 00:46:42 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state STARTED
2025-09-19 00:46:42.545370 | orchestrator | 2025-09-19 00:46:42 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:46:42.545696 | orchestrator | 2025-09-19 00:46:42 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:46:45.582317 | orchestrator | 2025-09-19 00:46:45 | INFO  | Task d2a96dd8-5436-4615-bed6-ed383f234c2a is in state STARTED
2025-09-19 00:46:45.585191 | orchestrator | 2025-09-19 00:46:45 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED
2025-09-19 00:46:45.587766 | orchestrator | 2025-09-19 00:46:45 | INFO  | Task ad38c34c-74a8-4188-8b56-dce190d31bce is in state SUCCESS
2025-09-19 00:46:45.589117 | orchestrator |
2025-09-19 00:46:45.589155 | orchestrator |
2025-09-19 00:46:45.589168 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2025-09-19 00:46:45.589180 | orchestrator |
2025-09-19 00:46:45.589192 | orchestrator | TASK [Get kubeconfig file]
*****************************************************
2025-09-19 00:46:45.589204 | orchestrator | Friday 19 September 2025 00:45:25 +0000 (0:00:00.233) 0:00:00.233 ******
2025-09-19 00:46:45.589215 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-09-19 00:46:45.589227 | orchestrator |
2025-09-19 00:46:45.589238 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-09-19 00:46:45.589248 | orchestrator | Friday 19 September 2025 00:45:25 +0000 (0:00:00.781) 0:00:01.014 ******
2025-09-19 00:46:45.589284 | orchestrator | changed: [testbed-manager]
2025-09-19 00:46:45.589458 | orchestrator |
2025-09-19 00:46:45.589475 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2025-09-19 00:46:45.589487 | orchestrator | Friday 19 September 2025 00:45:27 +0000 (0:00:01.468) 0:00:02.483 ******
2025-09-19 00:46:45.589499 | orchestrator | changed: [testbed-manager]
2025-09-19 00:46:45.589510 | orchestrator |
2025-09-19 00:46:45.589521 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 00:46:45.589534 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:46:45.589547 | orchestrator |
2025-09-19 00:46:45.589558 | orchestrator |
2025-09-19 00:46:45.589570 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 00:46:45.589581 | orchestrator | Friday 19 September 2025 00:45:27 +0000 (0:00:00.661) 0:00:03.144 ******
2025-09-19 00:46:45.589592 | orchestrator | ===============================================================================
2025-09-19 00:46:45.589604 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.47s
2025-09-19 00:46:45.589659 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.78s
2025-09-19 00:46:45.589672 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.66s
2025-09-19 00:46:45.589683 | orchestrator |
2025-09-19 00:46:45.589694 | orchestrator |
2025-09-19 00:46:45.589704 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-09-19 00:46:45.589715 | orchestrator |
2025-09-19 00:46:45.589726 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-09-19 00:46:45.589737 | orchestrator | Friday 19 September 2025 00:45:24 +0000 (0:00:00.185) 0:00:00.185 ******
2025-09-19 00:46:45.589748 | orchestrator | ok: [testbed-manager]
2025-09-19 00:46:45.589760 | orchestrator |
2025-09-19 00:46:45.589771 | orchestrator | TASK [Create .kube directory] **************************************************
2025-09-19 00:46:45.589782 | orchestrator | Friday 19 September 2025 00:45:25 +0000 (0:00:00.571) 0:00:00.757 ******
2025-09-19 00:46:45.589793 | orchestrator | ok: [testbed-manager]
2025-09-19 00:46:45.589804 | orchestrator |
2025-09-19 00:46:45.589815 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-09-19 00:46:45.589825 | orchestrator | Friday 19 September 2025 00:45:26 +0000 (0:00:00.575) 0:00:01.332 ******
2025-09-19 00:46:45.589836 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-09-19 00:46:45.589847 | orchestrator |
2025-09-19 00:46:45.589858 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-09-19 00:46:45.589868 | orchestrator | Friday 19 September 2025 00:45:26 +0000 (0:00:00.759) 0:00:02.092 ******
2025-09-19 00:46:45.589879 | orchestrator | changed: [testbed-manager]
2025-09-19 00:46:45.589889 | orchestrator |
2025-09-19 00:46:45.589900 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-09-19 00:46:45.589911 | orchestrator | Friday 19 September 2025 00:45:27 +0000 (0:00:01.190) 0:00:03.282 ******
2025-09-19 00:46:45.589921 | orchestrator | changed: [testbed-manager]
2025-09-19 00:46:45.589932 | orchestrator |
2025-09-19 00:46:45.589943 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-09-19 00:46:45.589954 | orchestrator | Friday 19 September 2025 00:45:28 +0000 (0:00:00.852) 0:00:04.135 ******
2025-09-19 00:46:45.589964 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-19 00:46:45.589975 | orchestrator |
2025-09-19 00:46:45.589986 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-09-19 00:46:45.589996 | orchestrator | Friday 19 September 2025 00:45:30 +0000 (0:00:01.391) 0:00:05.526 ******
2025-09-19 00:46:45.590007 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-19 00:46:45.590064 | orchestrator |
2025-09-19 00:46:45.590080 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-09-19 00:46:45.590091 | orchestrator | Friday 19 September 2025 00:45:31 +0000 (0:00:00.864) 0:00:06.390 ******
2025-09-19 00:46:45.590115 | orchestrator | ok: [testbed-manager]
2025-09-19 00:46:45.590129 | orchestrator |
2025-09-19 00:46:45.590142 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-09-19 00:46:45.590168 | orchestrator | Friday 19 September 2025 00:45:31 +0000 (0:00:00.366) 0:00:06.757 ******
2025-09-19 00:46:45.590182 | orchestrator | ok: [testbed-manager]
2025-09-19 00:46:45.590194 | orchestrator |
2025-09-19 00:46:45.590206 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 00:46:45.590220 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 00:46:45.590233 | orchestrator |
2025-09-19 00:46:45.590245 | orchestrator |
2025-09-19 00:46:45.590258 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 00:46:45.590270 | orchestrator | Friday 19 September 2025 00:45:31 +0000 (0:00:00.297) 0:00:07.054 ******
2025-09-19 00:46:45.590282 | orchestrator | ===============================================================================
2025-09-19 00:46:45.590295 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.39s
2025-09-19 00:46:45.590307 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.19s
2025-09-19 00:46:45.590320 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.86s
2025-09-19 00:46:45.590348 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.85s
2025-09-19 00:46:45.590361 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.76s
2025-09-19 00:46:45.590373 | orchestrator | Create .kube directory -------------------------------------------------- 0.58s
2025-09-19 00:46:45.590386 | orchestrator | Get home directory of operator user ------------------------------------- 0.57s
2025-09-19 00:46:45.590398 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.37s
2025-09-19 00:46:45.590410 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.30s
2025-09-19 00:46:45.590422 | orchestrator |
2025-09-19 00:46:45.590434 | orchestrator |
2025-09-19 00:46:45.590447 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-09-19 00:46:45.590459 | orchestrator |
2025-09-19 00:46:45.590471 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-09-19 00:46:45.590482 | orchestrator | Friday 19 September 2025 00:44:24 +0000 (0:00:00.075) 0:00:00.075 ******
2025-09-19 00:46:45.590492 | orchestrator | ok:
[localhost] => {
2025-09-19 00:46:45.590504 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-09-19 00:46:45.590516 | orchestrator | }
2025-09-19 00:46:45.590527 | orchestrator |
2025-09-19 00:46:45.590538 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-09-19 00:46:45.590549 | orchestrator | Friday 19 September 2025 00:44:24 +0000 (0:00:00.034) 0:00:00.109 ******
2025-09-19 00:46:45.590561 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-09-19 00:46:45.590573 | orchestrator | ...ignoring
2025-09-19 00:46:45.590585 | orchestrator |
2025-09-19 00:46:45.590596 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-09-19 00:46:45.590607 | orchestrator | Friday 19 September 2025 00:44:27 +0000 (0:00:02.790) 0:00:02.900 ******
2025-09-19 00:46:45.590637 | orchestrator | skipping: [localhost]
2025-09-19 00:46:45.590649 | orchestrator |
2025-09-19 00:46:45.590660 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-09-19 00:46:45.590671 | orchestrator | Friday 19 September 2025 00:44:27 +0000 (0:00:00.047) 0:00:02.947 ******
2025-09-19 00:46:45.590681 | orchestrator | ok: [localhost]
2025-09-19 00:46:45.590692 | orchestrator |
2025-09-19 00:46:45.590703 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 00:46:45.590713 | orchestrator |
2025-09-19 00:46:45.590724 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 00:46:45.590742 | orchestrator | Friday 19 September 2025 00:44:27 +0000 (0:00:00.141) 0:00:03.088 ******
2025-09-19 00:46:45.590753 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:46:45.590764 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:46:45.590775 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:46:45.590785 | orchestrator |
2025-09-19 00:46:45.590796 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 00:46:45.590807 | orchestrator | Friday 19 September 2025 00:44:28 +0000 (0:00:00.297) 0:00:03.386 ******
2025-09-19 00:46:45.590817 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-09-19 00:46:45.590828 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-09-19 00:46:45.590839 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-09-19 00:46:45.590849 | orchestrator |
2025-09-19 00:46:45.590860 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-09-19 00:46:45.590871 | orchestrator |
2025-09-19 00:46:45.590881 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-09-19 00:46:45.590892 | orchestrator | Friday 19 September 2025 00:44:28 +0000 (0:00:00.619) 0:00:04.005 ******
2025-09-19 00:46:45.590903 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 00:46:45.590914 | orchestrator |
2025-09-19 00:46:45.590925 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-09-19 00:46:45.590935 | orchestrator | Friday 19 September 2025 00:44:29 +0000 (0:00:00.810) 0:00:04.815 ******
2025-09-19 00:46:45.590946 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:46:45.590957 | orchestrator |
2025-09-19 00:46:45.590967 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-09-19 00:46:45.590978 | orchestrator | Friday 19 September 2025 00:44:30 +0000 (0:00:00.962) 0:00:05.778 ******
2025-09-19 00:46:45.590988 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:46:45.590999 | orchestrator |
2025-09-19 00:46:45.591010 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-09-19 00:46:45.591020 | orchestrator | Friday 19 September 2025 00:44:30 +0000 (0:00:00.334) 0:00:06.113 ******
2025-09-19 00:46:45.591031 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:46:45.591041 | orchestrator |
2025-09-19 00:46:45.591052 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-09-19 00:46:45.591064 | orchestrator | Friday 19 September 2025 00:44:31 +0000 (0:00:00.329) 0:00:06.442 ******
2025-09-19 00:46:45.591075 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:46:45.591085 | orchestrator |
2025-09-19 00:46:45.591096 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-09-19 00:46:45.591107 | orchestrator | Friday 19 September 2025 00:44:31 +0000 (0:00:00.322) 0:00:06.765 ******
2025-09-19 00:46:45.591117 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:46:45.591128 | orchestrator |
2025-09-19 00:46:45.591139 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-09-19 00:46:45.591150 | orchestrator | Friday 19 September 2025 00:44:32 +0000 (0:00:00.461) 0:00:07.226 ******
2025-09-19 00:46:45.591161 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 00:46:45.591171 | orchestrator |
2025-09-19 00:46:45.591182 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-09-19 00:46:45.591202 | orchestrator | Friday 19 September 2025 00:44:33 +0000 (0:00:01.571) 0:00:08.797 ******
2025-09-19 00:46:45.591213 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:46:45.591224 | orchestrator |
2025-09-19 00:46:45.591235 | orchestrator | TASK [rabbitmq : List
RabbitMQ policies] ***************************************
2025-09-19 00:46:45.591246 | orchestrator | Friday 19 September 2025 00:44:35 +0000 (0:00:01.555) 0:00:10.353 ******
2025-09-19 00:46:45.591256 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:46:45.591267 | orchestrator |
2025-09-19 00:46:45.591278 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-09-19 00:46:45.591296 | orchestrator | Friday 19 September 2025 00:44:36 +0000 (0:00:01.430) 0:00:11.785 ******
2025-09-19 00:46:45.591307 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:46:45.591318 | orchestrator |
2025-09-19 00:46:45.591329 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-09-19 00:46:45.591339 | orchestrator | Friday 19 September 2025 00:44:37 +0000 (0:00:00.907) 0:00:12.693 ******
2025-09-19 00:46:45.591431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 00:46:45.591461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 00:46:45.591479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 00:46:45.591492 | orchestrator |
2025-09-19 00:46:45.591503 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2025-09-19 00:46:45.591514 | orchestrator | Friday 19 September 2025 00:44:39 +0000 (0:00:01.495) 0:00:14.189 ******
2025-09-19 00:46:45.591537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 00:46:45.591558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 00:46:45.591571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 00:46:45.591583 | orchestrator |
2025-09-19 00:46:45.591594 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf]
******************************* 2025-09-19 00:46:45.591606 | orchestrator | Friday 19 September 2025 00:44:40 +0000 (0:00:01.859) 0:00:16.049 ****** 2025-09-19 00:46:45.591646 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-19 00:46:45.591659 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-19 00:46:45.591670 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-19 00:46:45.591681 | orchestrator | 2025-09-19 00:46:45.591691 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-09-19 00:46:45.591702 | orchestrator | Friday 19 September 2025 00:44:42 +0000 (0:00:01.904) 0:00:17.953 ****** 2025-09-19 00:46:45.591720 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-19 00:46:45.591732 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-19 00:46:45.591743 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-19 00:46:45.591753 | orchestrator | 2025-09-19 00:46:45.591770 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-09-19 00:46:45.591782 | orchestrator | Friday 19 September 2025 00:44:44 +0000 (0:00:02.025) 0:00:19.979 ****** 2025-09-19 00:46:45.591792 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-19 00:46:45.591803 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-19 00:46:45.591814 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-19 00:46:45.591825 | orchestrator | 2025-09-19 00:46:45.591836 | orchestrator | TASK 
[rabbitmq : Copying over advanced.config] ********************************* 2025-09-19 00:46:45.591847 | orchestrator | Friday 19 September 2025 00:44:46 +0000 (0:00:01.496) 0:00:21.476 ****** 2025-09-19 00:46:45.591859 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-19 00:46:45.591870 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-19 00:46:45.591881 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-19 00:46:45.591891 | orchestrator | 2025-09-19 00:46:45.591902 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-09-19 00:46:45.591914 | orchestrator | Friday 19 September 2025 00:44:48 +0000 (0:00:02.250) 0:00:23.726 ****** 2025-09-19 00:46:45.591924 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-19 00:46:45.591935 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-19 00:46:45.591946 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-19 00:46:45.591957 | orchestrator | 2025-09-19 00:46:45.591968 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-09-19 00:46:45.591979 | orchestrator | Friday 19 September 2025 00:44:50 +0000 (0:00:01.842) 0:00:25.569 ****** 2025-09-19 00:46:45.591990 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-19 00:46:45.592001 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-19 00:46:45.592012 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-19 00:46:45.592023 | 
orchestrator | 2025-09-19 00:46:45.592034 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-19 00:46:45.592045 | orchestrator | Friday 19 September 2025 00:44:52 +0000 (0:00:01.683) 0:00:27.253 ****** 2025-09-19 00:46:45.592056 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:46:45.592067 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:46:45.592078 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:46:45.592089 | orchestrator | 2025-09-19 00:46:45.592099 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-09-19 00:46:45.592110 | orchestrator | Friday 19 September 2025 00:44:52 +0000 (0:00:00.532) 0:00:27.785 ****** 2025-09-19 00:46:45.592123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-19 00:46:45.592156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 
'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-19 00:46:45.592170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 
'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-19 00:46:45.592182 | orchestrator | 2025-09-19 00:46:45.592194 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-09-19 00:46:45.592204 | orchestrator | Friday 19 September 2025 00:44:54 +0000 (0:00:01.560) 0:00:29.346 ****** 2025-09-19 00:46:45.592215 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:46:45.592227 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:46:45.592238 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:46:45.592249 | orchestrator | 2025-09-19 00:46:45.592260 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-09-19 00:46:45.592271 | orchestrator | Friday 19 September 2025 00:44:55 +0000 (0:00:00.974) 0:00:30.321 ****** 2025-09-19 00:46:45.592282 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:46:45.592293 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:46:45.592304 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:46:45.592314 | orchestrator | 2025-09-19 00:46:45.592325 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-09-19 00:46:45.592336 | orchestrator | Friday 19 September 2025 00:45:03 +0000 (0:00:08.005) 0:00:38.326 ****** 2025-09-19 00:46:45.592347 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:46:45.592365 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:46:45.592376 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:46:45.592386 | orchestrator | 2025-09-19 00:46:45.592397 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-19 00:46:45.592408 | orchestrator | 2025-09-19 00:46:45.592418 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-19 00:46:45.592429 | orchestrator | Friday 19 September 2025 00:45:03 +0000 (0:00:00.422) 
0:00:38.748 ****** 2025-09-19 00:46:45.592440 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:46:45.592451 | orchestrator | 2025-09-19 00:46:45.592462 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-19 00:46:45.592473 | orchestrator | Friday 19 September 2025 00:45:04 +0000 (0:00:00.775) 0:00:39.524 ****** 2025-09-19 00:46:45.592484 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:46:45.592494 | orchestrator | 2025-09-19 00:46:45.592505 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-19 00:46:45.592516 | orchestrator | Friday 19 September 2025 00:45:04 +0000 (0:00:00.269) 0:00:39.794 ****** 2025-09-19 00:46:45.592527 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:46:45.592538 | orchestrator | 2025-09-19 00:46:45.592549 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-19 00:46:45.592560 | orchestrator | Friday 19 September 2025 00:45:06 +0000 (0:00:01.629) 0:00:41.423 ****** 2025-09-19 00:46:45.592571 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:46:45.592581 | orchestrator | 2025-09-19 00:46:45.592592 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-19 00:46:45.592603 | orchestrator | 2025-09-19 00:46:45.592665 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-19 00:46:45.592680 | orchestrator | Friday 19 September 2025 00:46:02 +0000 (0:00:56.034) 0:01:37.457 ****** 2025-09-19 00:46:45.592696 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:46:45.592707 | orchestrator | 2025-09-19 00:46:45.592718 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-19 00:46:45.592729 | orchestrator | Friday 19 September 2025 00:46:02 +0000 (0:00:00.625) 0:01:38.083 ****** 2025-09-19 00:46:45.592740 | 
orchestrator | skipping: [testbed-node-1] 2025-09-19 00:46:45.592751 | orchestrator | 2025-09-19 00:46:45.592762 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-19 00:46:45.592773 | orchestrator | Friday 19 September 2025 00:46:03 +0000 (0:00:00.472) 0:01:38.555 ****** 2025-09-19 00:46:45.592783 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:46:45.592794 | orchestrator | 2025-09-19 00:46:45.592805 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-19 00:46:45.592815 | orchestrator | Friday 19 September 2025 00:46:10 +0000 (0:00:07.142) 0:01:45.698 ****** 2025-09-19 00:46:45.592826 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:46:45.592837 | orchestrator | 2025-09-19 00:46:45.592848 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-19 00:46:45.592858 | orchestrator | 2025-09-19 00:46:45.592869 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-19 00:46:45.592887 | orchestrator | Friday 19 September 2025 00:46:22 +0000 (0:00:11.624) 0:01:57.323 ****** 2025-09-19 00:46:45.592898 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:46:45.592909 | orchestrator | 2025-09-19 00:46:45.592920 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-19 00:46:45.592931 | orchestrator | Friday 19 September 2025 00:46:22 +0000 (0:00:00.694) 0:01:58.017 ****** 2025-09-19 00:46:45.592941 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:46:45.592952 | orchestrator | 2025-09-19 00:46:45.592963 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-19 00:46:45.592973 | orchestrator | Friday 19 September 2025 00:46:23 +0000 (0:00:00.464) 0:01:58.482 ****** 2025-09-19 00:46:45.592984 | orchestrator | changed: [testbed-node-2] 2025-09-19 
00:46:45.592995 | orchestrator | 2025-09-19 00:46:45.593006 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-19 00:46:45.593023 | orchestrator | Friday 19 September 2025 00:46:25 +0000 (0:00:01.748) 0:02:00.230 ****** 2025-09-19 00:46:45.593032 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:46:45.593042 | orchestrator | 2025-09-19 00:46:45.593052 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-09-19 00:46:45.593061 | orchestrator | 2025-09-19 00:46:45.593071 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-09-19 00:46:45.593080 | orchestrator | Friday 19 September 2025 00:46:41 +0000 (0:00:16.420) 0:02:16.650 ****** 2025-09-19 00:46:45.593089 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:46:45.593099 | orchestrator | 2025-09-19 00:46:45.593108 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-09-19 00:46:45.593118 | orchestrator | Friday 19 September 2025 00:46:42 +0000 (0:00:00.862) 0:02:17.512 ****** 2025-09-19 00:46:45.593127 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-19 00:46:45.593137 | orchestrator | enable_outward_rabbitmq_True 2025-09-19 00:46:45.593146 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-19 00:46:45.593156 | orchestrator | outward_rabbitmq_restart 2025-09-19 00:46:45.593165 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:46:45.593175 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:46:45.593184 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:46:45.593193 | orchestrator | 2025-09-19 00:46:45.593203 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-09-19 00:46:45.593212 | orchestrator | skipping: no hosts matched 2025-09-19 
00:46:45.593222 | orchestrator | 2025-09-19 00:46:45.593232 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-09-19 00:46:45.593241 | orchestrator | skipping: no hosts matched 2025-09-19 00:46:45.593251 | orchestrator | 2025-09-19 00:46:45.593260 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-09-19 00:46:45.593270 | orchestrator | skipping: no hosts matched 2025-09-19 00:46:45.593279 | orchestrator | 2025-09-19 00:46:45.593289 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:46:45.593299 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-19 00:46:45.593309 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-19 00:46:45.593319 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 00:46:45.593329 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 00:46:45.593338 | orchestrator | 2025-09-19 00:46:45.593348 | orchestrator | 2025-09-19 00:46:45.593357 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 00:46:45.593367 | orchestrator | Friday 19 September 2025 00:46:44 +0000 (0:00:02.564) 0:02:20.077 ****** 2025-09-19 00:46:45.593376 | orchestrator | =============================================================================== 2025-09-19 00:46:45.593386 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 84.08s 2025-09-19 00:46:45.593396 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.52s 2025-09-19 00:46:45.593405 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.01s 
2025-09-19 00:46:45.593415 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.79s 2025-09-19 00:46:45.593424 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.56s 2025-09-19 00:46:45.593439 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.25s 2025-09-19 00:46:45.593448 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.10s 2025-09-19 00:46:45.593464 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.03s 2025-09-19 00:46:45.593473 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.90s 2025-09-19 00:46:45.593482 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.86s 2025-09-19 00:46:45.593492 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.84s 2025-09-19 00:46:45.593501 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.68s 2025-09-19 00:46:45.593510 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.57s 2025-09-19 00:46:45.593520 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.56s 2025-09-19 00:46:45.593529 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.56s 2025-09-19 00:46:45.593543 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.50s 2025-09-19 00:46:45.593553 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.50s 2025-09-19 00:46:45.593563 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 1.43s 2025-09-19 00:46:45.593572 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.21s 2025-09-19 
00:46:45.593582 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.97s 2025-09-19 00:46:45.593591 | orchestrator | 2025-09-19 00:46:45 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:46:45.593601 | orchestrator | 2025-09-19 00:46:45 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:46:48.627850 | orchestrator | 2025-09-19 00:46:48 | INFO  | Task d2a96dd8-5436-4615-bed6-ed383f234c2a is in state STARTED 2025-09-19 00:46:48.628790 | orchestrator | 2025-09-19 00:46:48 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:46:48.631106 | orchestrator | 2025-09-19 00:46:48 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:46:48.631132 | orchestrator | 2025-09-19 00:46:48 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:46:51.674519 | orchestrator | 2025-09-19 00:46:51 | INFO  | Task d2a96dd8-5436-4615-bed6-ed383f234c2a is in state STARTED 2025-09-19 00:46:51.675232 | orchestrator | 2025-09-19 00:46:51 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:46:51.677546 | orchestrator | 2025-09-19 00:46:51 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:46:51.677943 | orchestrator | 2025-09-19 00:46:51 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:46:54.722901 | orchestrator | 2025-09-19 00:46:54 | INFO  | Task d2a96dd8-5436-4615-bed6-ed383f234c2a is in state STARTED 2025-09-19 00:46:54.723178 | orchestrator | 2025-09-19 00:46:54 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:46:54.724732 | orchestrator | 2025-09-19 00:46:54 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:46:54.724880 | orchestrator | 2025-09-19 00:46:54 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:46:57.765908 | orchestrator | 2025-09-19 00:46:57 | INFO  | Task 
d2a96dd8-5436-4615-bed6-ed383f234c2a is in state STARTED [... identical polling entries for tasks d2a96dd8-5436-4615-bed6-ed383f234c2a, c1911044-16b2-4edf-8ad1-a00d359215cf and 707624ed-d7b8-4737-bba4-a9ff49d02733 ("is in state STARTED" / "Wait 1 second(s) until the next check") repeat every ~3 seconds from 00:46:57 through 00:47:28 ...] 2025-09-19 00:47:31.271566 | orchestrator | 2025-09-19 00:47:31 | INFO  | Task d2a96dd8-5436-4615-bed6-ed383f234c2a is in state STARTED 2025-09-19 00:47:31.273545 | orchestrator | 2025-09-19 00:47:31 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:47:31.276185 | orchestrator | 2025-09-19 00:47:31 |
INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:47:31.276266 | orchestrator | 2025-09-19 00:47:31 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:47:34.325011 | orchestrator | 2025-09-19 00:47:34 | INFO  | Task d2a96dd8-5436-4615-bed6-ed383f234c2a is in state SUCCESS 2025-09-19 00:47:34.325116 | orchestrator | 2025-09-19 00:47:34.328032 | orchestrator | 2025-09-19 00:47:34.328080 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 00:47:34.328092 | orchestrator | 2025-09-19 00:47:34.328104 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 00:47:34.328115 | orchestrator | Friday 19 September 2025 00:45:15 +0000 (0:00:00.279) 0:00:00.279 ****** 2025-09-19 00:47:34.328126 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:47:34.328138 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:47:34.328149 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:47:34.328160 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:47:34.328170 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:47:34.328181 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:47:34.328192 | orchestrator | 2025-09-19 00:47:34.328203 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 00:47:34.328214 | orchestrator | Friday 19 September 2025 00:45:17 +0000 (0:00:01.513) 0:00:01.793 ****** 2025-09-19 00:47:34.328225 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-09-19 00:47:34.328236 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-09-19 00:47:34.328247 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-09-19 00:47:34.328258 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-09-19 00:47:34.328268 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-09-19 00:47:34.328279 | orchestrator | ok: 
[testbed-node-2] => (item=enable_ovn_True) 2025-09-19 00:47:34.328290 | orchestrator | 2025-09-19 00:47:34.328301 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-09-19 00:47:34.328312 | orchestrator | 2025-09-19 00:47:34.328322 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-09-19 00:47:34.328355 | orchestrator | Friday 19 September 2025 00:45:18 +0000 (0:00:01.031) 0:00:02.825 ****** 2025-09-19 00:47:34.328368 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:47:34.328379 | orchestrator | 2025-09-19 00:47:34.328390 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-09-19 00:47:34.328401 | orchestrator | Friday 19 September 2025 00:45:19 +0000 (0:00:01.678) 0:00:04.504 ****** 2025-09-19 00:47:34.328413 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.328427 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.328438 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.328449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.328460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.328611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.328624 | orchestrator | 2025-09-19 00:47:34.328635 | orchestrator | TASK [ovn-controller : Copying over config.json files 
for services] ************ 2025-09-19 00:47:34.328646 | orchestrator | Friday 19 September 2025 00:45:21 +0000 (0:00:01.468) 0:00:05.972 ****** 2025-09-19 00:47:34.328657 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.328678 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.328689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.328700 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.328711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.328723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.328734 | orchestrator | 2025-09-19 00:47:34.328744 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-09-19 00:47:34.328755 | orchestrator | Friday 19 September 2025 00:45:23 +0000 (0:00:01.764) 0:00:07.737 ****** 2025-09-19 00:47:34.328766 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.328777 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.328800 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.328819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.328830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.328841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.328852 | orchestrator | 2025-09-19 00:47:34.328863 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-09-19 00:47:34.328873 | orchestrator | Friday 19 September 2025 00:45:24 +0000 (0:00:01.324) 0:00:09.061 ****** 2025-09-19 00:47:34.328884 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.328895 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.328906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.328917 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.328937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.328957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.328974 | orchestrator | 2025-09-19 00:47:34.328986 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-09-19 00:47:34.328996 | orchestrator | Friday 19 September 2025 00:45:26 +0000 (0:00:02.141) 0:00:11.202 ****** 2025-09-19 00:47:34.329008 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.329019 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.329030 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.329041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.329052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.329063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.329073 | orchestrator | 2025-09-19 00:47:34.329085 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-09-19 00:47:34.329095 | orchestrator | Friday 19 September 2025 00:45:29 +0000 (0:00:02.532) 0:00:13.735 ****** 2025-09-19 00:47:34.329106 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:47:34.329118 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:47:34.329129 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:47:34.329139 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:47:34.329150 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:47:34.329169 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:47:34.329180 | orchestrator | 2025-09-19 00:47:34.329190 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-09-19 00:47:34.329202 | orchestrator | Friday 19 September 2025 00:45:31 +0000 (0:00:02.446) 0:00:16.182 ****** 2025-09-19 00:47:34.329214 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-09-19 00:47:34.329228 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-09-19 00:47:34.329245 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-09-19 00:47:34.329264 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-09-19 00:47:34.329276 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-09-19 00:47:34.329289 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-09-19 00:47:34.329301 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-19 00:47:34.329314 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-19 00:47:34.329325 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-19 00:47:34.329337 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-19 00:47:34.329349 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-19 00:47:34.329360 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-19 00:47:34.329373 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-19 00:47:34.329386 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-19 00:47:34.329399 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-19 00:47:34.329411 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-19 00:47:34.329423 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 
2025-09-19 00:47:34.329435 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-19 00:47:34.329448 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-19 00:47:34.329460 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-19 00:47:34.329472 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-19 00:47:34.329485 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-19 00:47:34.329497 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-19 00:47:34.329527 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-19 00:47:34.329540 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-19 00:47:34.329553 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-19 00:47:34.329564 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-19 00:47:34.329575 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-19 00:47:34.329592 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-19 00:47:34.329603 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-19 00:47:34.329614 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-19 00:47:34.329625 | orchestrator | changed: 
[testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-19 00:47:34.329636 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-19 00:47:34.329646 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-19 00:47:34.329657 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-19 00:47:34.329668 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-19 00:47:34.329679 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-19 00:47:34.329690 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-19 00:47:34.329701 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-19 00:47:34.329716 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-19 00:47:34.329733 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-19 00:47:34.329744 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-09-19 00:47:34.329755 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-19 00:47:34.329766 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-09-19 00:47:34.329777 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 
'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-09-19 00:47:34.329788 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-09-19 00:47:34.329799 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-09-19 00:47:34.329809 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-19 00:47:34.329820 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-09-19 00:47:34.329831 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-19 00:47:34.329842 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-19 00:47:34.329853 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-19 00:47:34.329863 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-19 00:47:34.329874 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-19 00:47:34.329885 | orchestrator | 2025-09-19 00:47:34.329896 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-19 00:47:34.329915 | orchestrator | Friday 19 September 2025 00:45:51 +0000 (0:00:19.796) 0:00:35.978 ****** 2025-09-19 00:47:34.329925 | orchestrator | 2025-09-19 00:47:34.329936 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-19 
00:47:34.329947 | orchestrator | Friday 19 September 2025 00:45:51 +0000 (0:00:00.063) 0:00:36.041 ****** 2025-09-19 00:47:34.329958 | orchestrator | 2025-09-19 00:47:34.329968 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-19 00:47:34.329979 | orchestrator | Friday 19 September 2025 00:45:51 +0000 (0:00:00.064) 0:00:36.106 ****** 2025-09-19 00:47:34.329990 | orchestrator | 2025-09-19 00:47:34.330000 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-19 00:47:34.330011 | orchestrator | Friday 19 September 2025 00:45:51 +0000 (0:00:00.063) 0:00:36.170 ****** 2025-09-19 00:47:34.330097 | orchestrator | 2025-09-19 00:47:34.330108 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-19 00:47:34.330119 | orchestrator | Friday 19 September 2025 00:45:51 +0000 (0:00:00.062) 0:00:36.232 ****** 2025-09-19 00:47:34.330130 | orchestrator | 2025-09-19 00:47:34.330140 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-19 00:47:34.330151 | orchestrator | Friday 19 September 2025 00:45:51 +0000 (0:00:00.060) 0:00:36.292 ****** 2025-09-19 00:47:34.330162 | orchestrator | 2025-09-19 00:47:34.330172 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-09-19 00:47:34.330183 | orchestrator | Friday 19 September 2025 00:45:51 +0000 (0:00:00.068) 0:00:36.361 ****** 2025-09-19 00:47:34.330194 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:47:34.330205 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:47:34.330215 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:47:34.330226 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:47:34.330237 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:47:34.330247 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:47:34.330258 | orchestrator | 2025-09-19 00:47:34.330269 | 
orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-09-19 00:47:34.330388 | orchestrator | Friday 19 September 2025 00:45:53 +0000 (0:00:01.630) 0:00:37.991 ****** 2025-09-19 00:47:34.330406 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:47:34.330417 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:47:34.330427 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:47:34.330438 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:47:34.330449 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:47:34.330459 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:47:34.330470 | orchestrator | 2025-09-19 00:47:34.330480 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-09-19 00:47:34.330491 | orchestrator | 2025-09-19 00:47:34.330501 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-19 00:47:34.330534 | orchestrator | Friday 19 September 2025 00:46:21 +0000 (0:00:28.633) 0:01:06.624 ****** 2025-09-19 00:47:34.330603 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:47:34.330618 | orchestrator | 2025-09-19 00:47:34.330628 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-19 00:47:34.330639 | orchestrator | Friday 19 September 2025 00:46:22 +0000 (0:00:00.683) 0:01:07.308 ****** 2025-09-19 00:47:34.330656 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:47:34.330667 | orchestrator | 2025-09-19 00:47:34.330687 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-09-19 00:47:34.330699 | orchestrator | Friday 19 September 2025 00:46:23 +0000 (0:00:00.534) 0:01:07.843 ****** 2025-09-19 00:47:34.330709 | orchestrator | 
ok: [testbed-node-0] 2025-09-19 00:47:34.330720 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:47:34.330731 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:47:34.330742 | orchestrator | 2025-09-19 00:47:34.330752 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-09-19 00:47:34.330772 | orchestrator | Friday 19 September 2025 00:46:24 +0000 (0:00:01.286) 0:01:09.129 ****** 2025-09-19 00:47:34.330783 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:47:34.330794 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:47:34.330804 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:47:34.330815 | orchestrator | 2025-09-19 00:47:34.330826 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-09-19 00:47:34.330836 | orchestrator | Friday 19 September 2025 00:46:24 +0000 (0:00:00.340) 0:01:09.470 ****** 2025-09-19 00:47:34.330847 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:47:34.330857 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:47:34.330868 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:47:34.330878 | orchestrator | 2025-09-19 00:47:34.330889 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-09-19 00:47:34.330900 | orchestrator | Friday 19 September 2025 00:46:25 +0000 (0:00:00.344) 0:01:09.814 ****** 2025-09-19 00:47:34.330910 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:47:34.330921 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:47:34.330932 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:47:34.330942 | orchestrator | 2025-09-19 00:47:34.330953 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-09-19 00:47:34.330964 | orchestrator | Friday 19 September 2025 00:46:25 +0000 (0:00:00.378) 0:01:10.192 ****** 2025-09-19 00:47:34.330974 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:47:34.330985 | orchestrator | ok: 
[testbed-node-1] 2025-09-19 00:47:34.330996 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:47:34.331006 | orchestrator | 2025-09-19 00:47:34.331017 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-09-19 00:47:34.331028 | orchestrator | Friday 19 September 2025 00:46:26 +0000 (0:00:00.506) 0:01:10.699 ****** 2025-09-19 00:47:34.331038 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:47:34.331049 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:47:34.331060 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:47:34.331070 | orchestrator | 2025-09-19 00:47:34.331081 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-09-19 00:47:34.331092 | orchestrator | Friday 19 September 2025 00:46:26 +0000 (0:00:00.293) 0:01:10.992 ****** 2025-09-19 00:47:34.331102 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:47:34.331113 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:47:34.331124 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:47:34.331134 | orchestrator | 2025-09-19 00:47:34.331145 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-09-19 00:47:34.331156 | orchestrator | Friday 19 September 2025 00:46:26 +0000 (0:00:00.312) 0:01:11.304 ****** 2025-09-19 00:47:34.331166 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:47:34.331177 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:47:34.331188 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:47:34.331198 | orchestrator | 2025-09-19 00:47:34.331209 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-09-19 00:47:34.331220 | orchestrator | Friday 19 September 2025 00:46:26 +0000 (0:00:00.275) 0:01:11.580 ****** 2025-09-19 00:47:34.331230 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:47:34.331241 | orchestrator | skipping: 
[testbed-node-1] 2025-09-19 00:47:34.331252 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:47:34.331262 | orchestrator | 2025-09-19 00:47:34.331273 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-09-19 00:47:34.331284 | orchestrator | Friday 19 September 2025 00:46:27 +0000 (0:00:00.521) 0:01:12.102 ****** 2025-09-19 00:47:34.331294 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:47:34.331305 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:47:34.331315 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:47:34.331326 | orchestrator | 2025-09-19 00:47:34.331337 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-09-19 00:47:34.331348 | orchestrator | Friday 19 September 2025 00:46:27 +0000 (0:00:00.288) 0:01:12.391 ****** 2025-09-19 00:47:34.331365 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:47:34.331375 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:47:34.331386 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:47:34.331397 | orchestrator | 2025-09-19 00:47:34.331407 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-09-19 00:47:34.331418 | orchestrator | Friday 19 September 2025 00:46:28 +0000 (0:00:00.328) 0:01:12.720 ****** 2025-09-19 00:47:34.331429 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:47:34.331440 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:47:34.331450 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:47:34.331461 | orchestrator | 2025-09-19 00:47:34.331472 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-09-19 00:47:34.331482 | orchestrator | Friday 19 September 2025 00:46:28 +0000 (0:00:00.336) 0:01:13.056 ****** 2025-09-19 00:47:34.331493 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:47:34.331504 | orchestrator | skipping: 
[testbed-node-1] 2025-09-19 00:47:34.331584 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:47:34.331596 | orchestrator | 2025-09-19 00:47:34.331606 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-09-19 00:47:34.331617 | orchestrator | Friday 19 September 2025 00:46:28 +0000 (0:00:00.493) 0:01:13.550 ****** 2025-09-19 00:47:34.331628 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:47:34.331639 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:47:34.331650 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:47:34.331660 | orchestrator | 2025-09-19 00:47:34.331671 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-09-19 00:47:34.331682 | orchestrator | Friday 19 September 2025 00:46:29 +0000 (0:00:00.318) 0:01:13.868 ****** 2025-09-19 00:47:34.331693 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:47:34.331704 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:47:34.331715 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:47:34.331725 | orchestrator | 2025-09-19 00:47:34.331742 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-09-19 00:47:34.331754 | orchestrator | Friday 19 September 2025 00:46:29 +0000 (0:00:00.319) 0:01:14.188 ****** 2025-09-19 00:47:34.331764 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:47:34.331775 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:47:34.331786 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:47:34.331797 | orchestrator | 2025-09-19 00:47:34.331808 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-09-19 00:47:34.331818 | orchestrator | Friday 19 September 2025 00:46:29 +0000 (0:00:00.336) 0:01:14.524 ****** 2025-09-19 00:47:34.331829 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:47:34.331840 | orchestrator | skipping: 
[testbed-node-1] 2025-09-19 00:47:34.331851 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:47:34.331861 | orchestrator | 2025-09-19 00:47:34.331872 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-19 00:47:34.331883 | orchestrator | Friday 19 September 2025 00:46:30 +0000 (0:00:00.512) 0:01:15.036 ****** 2025-09-19 00:47:34.331893 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:47:34.331904 | orchestrator | 2025-09-19 00:47:34.331915 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-09-19 00:47:34.331926 | orchestrator | Friday 19 September 2025 00:46:30 +0000 (0:00:00.536) 0:01:15.572 ****** 2025-09-19 00:47:34.331936 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:47:34.331947 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:47:34.331958 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:47:34.331968 | orchestrator | 2025-09-19 00:47:34.331979 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-09-19 00:47:34.331990 | orchestrator | Friday 19 September 2025 00:46:31 +0000 (0:00:00.410) 0:01:15.983 ****** 2025-09-19 00:47:34.332001 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:47:34.332011 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:47:34.332029 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:47:34.332040 | orchestrator | 2025-09-19 00:47:34.332051 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-09-19 00:47:34.332061 | orchestrator | Friday 19 September 2025 00:46:32 +0000 (0:00:00.702) 0:01:16.686 ****** 2025-09-19 00:47:34.332071 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:47:34.332081 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:47:34.332090 | orchestrator | skipping: [testbed-node-2] 
2025-09-19 00:47:34.332100 | orchestrator | 2025-09-19 00:47:34.332109 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-09-19 00:47:34.332119 | orchestrator | Friday 19 September 2025 00:46:32 +0000 (0:00:00.478) 0:01:17.164 ****** 2025-09-19 00:47:34.332129 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:47:34.332138 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:47:34.332147 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:47:34.332157 | orchestrator | 2025-09-19 00:47:34.332166 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-09-19 00:47:34.332176 | orchestrator | Friday 19 September 2025 00:46:33 +0000 (0:00:00.605) 0:01:17.769 ****** 2025-09-19 00:47:34.332185 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:47:34.332195 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:47:34.332204 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:47:34.332213 | orchestrator | 2025-09-19 00:47:34.332223 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-09-19 00:47:34.332232 | orchestrator | Friday 19 September 2025 00:46:33 +0000 (0:00:00.698) 0:01:18.467 ****** 2025-09-19 00:47:34.332242 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:47:34.332251 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:47:34.332261 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:47:34.332270 | orchestrator | 2025-09-19 00:47:34.332279 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-09-19 00:47:34.332289 | orchestrator | Friday 19 September 2025 00:46:34 +0000 (0:00:00.891) 0:01:19.359 ****** 2025-09-19 00:47:34.332299 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:47:34.332308 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:47:34.332318 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 00:47:34.332327 | orchestrator | 2025-09-19 00:47:34.332337 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-09-19 00:47:34.332346 | orchestrator | Friday 19 September 2025 00:46:35 +0000 (0:00:00.477) 0:01:19.836 ****** 2025-09-19 00:47:34.332356 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:47:34.332365 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:47:34.332375 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:47:34.332384 | orchestrator | 2025-09-19 00:47:34.332394 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-19 00:47:34.332403 | orchestrator | Friday 19 September 2025 00:46:35 +0000 (0:00:00.533) 0:01:20.370 ****** 2025-09-19 00:47:34.332443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.332456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.332476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.332494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.332528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.332539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.332549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.332559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': 
{'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.332569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.332579 | orchestrator | 2025-09-19 00:47:34.332588 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-19 00:47:34.332598 | orchestrator | Friday 19 September 2025 00:46:37 +0000 (0:00:01.648) 0:01:22.018 ****** 2025-09-19 00:47:34.332608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.332618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.332632 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.332658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.332669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.332679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.332689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.332699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.332708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.332718 | orchestrator | 2025-09-19 00:47:34.332728 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-19 00:47:34.332737 | orchestrator | Friday 19 September 2025 00:46:41 +0000 (0:00:04.498) 0:01:26.517 ****** 2025-09-19 00:47:34.332747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.332757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.332767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.332793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.332804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.332814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-09-19 00:47:34.332824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.332834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.332844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.332853 | orchestrator | 2025-09-19 00:47:34.332863 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-19 00:47:34.332873 | orchestrator | Friday 19 September 2025 00:46:44 +0000 (0:00:02.319) 0:01:28.836 ****** 2025-09-19 00:47:34.332882 | orchestrator | 2025-09-19 00:47:34.332892 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-19 00:47:34.332901 | orchestrator | Friday 19 September 2025 00:46:44 +0000 (0:00:00.081) 0:01:28.918 ****** 2025-09-19 00:47:34.332911 | orchestrator | 
2025-09-19 00:47:34.332920 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-19 00:47:34.332930 | orchestrator | Friday 19 September 2025 00:46:44 +0000 (0:00:00.065) 0:01:28.984 ****** 2025-09-19 00:47:34.332939 | orchestrator | 2025-09-19 00:47:34.332949 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-19 00:47:34.332959 | orchestrator | Friday 19 September 2025 00:46:44 +0000 (0:00:00.065) 0:01:29.050 ****** 2025-09-19 00:47:34.332974 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:47:34.332983 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:47:34.332993 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:47:34.333002 | orchestrator | 2025-09-19 00:47:34.333012 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-19 00:47:34.333021 | orchestrator | Friday 19 September 2025 00:46:51 +0000 (0:00:07.515) 0:01:36.566 ****** 2025-09-19 00:47:34.333031 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:47:34.333040 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:47:34.333050 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:47:34.333059 | orchestrator | 2025-09-19 00:47:34.333069 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-19 00:47:34.333078 | orchestrator | Friday 19 September 2025 00:46:54 +0000 (0:00:02.846) 0:01:39.413 ****** 2025-09-19 00:47:34.333088 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:47:34.333097 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:47:34.333106 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:47:34.333116 | orchestrator | 2025-09-19 00:47:34.333125 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-19 00:47:34.333135 | orchestrator | Friday 19 September 2025 00:46:57 +0000 (0:00:02.551) 
0:01:41.964 ****** 2025-09-19 00:47:34.333144 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:47:34.333154 | orchestrator | 2025-09-19 00:47:34.333163 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-19 00:47:34.333173 | orchestrator | Friday 19 September 2025 00:46:57 +0000 (0:00:00.109) 0:01:42.074 ****** 2025-09-19 00:47:34.333182 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:47:34.333197 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:47:34.333206 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:47:34.333216 | orchestrator | 2025-09-19 00:47:34.333230 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-19 00:47:34.333240 | orchestrator | Friday 19 September 2025 00:46:58 +0000 (0:00:00.887) 0:01:42.961 ****** 2025-09-19 00:47:34.333250 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:47:34.333260 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:47:34.333269 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:47:34.333278 | orchestrator | 2025-09-19 00:47:34.333288 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-19 00:47:34.333297 | orchestrator | Friday 19 September 2025 00:46:58 +0000 (0:00:00.604) 0:01:43.566 ****** 2025-09-19 00:47:34.333307 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:47:34.333316 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:47:34.333326 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:47:34.333335 | orchestrator | 2025-09-19 00:47:34.333345 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-19 00:47:34.333354 | orchestrator | Friday 19 September 2025 00:46:59 +0000 (0:00:01.069) 0:01:44.636 ****** 2025-09-19 00:47:34.333364 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:47:34.333373 | orchestrator | skipping: [testbed-node-2] 2025-09-19 
00:47:34.333383 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:47:34.333392 | orchestrator | 2025-09-19 00:47:34.333402 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-19 00:47:34.333411 | orchestrator | Friday 19 September 2025 00:47:00 +0000 (0:00:00.642) 0:01:45.279 ****** 2025-09-19 00:47:34.333421 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:47:34.333430 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:47:34.333440 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:47:34.333449 | orchestrator | 2025-09-19 00:47:34.333459 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-19 00:47:34.333468 | orchestrator | Friday 19 September 2025 00:47:01 +0000 (0:00:00.749) 0:01:46.028 ****** 2025-09-19 00:47:34.333478 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:47:34.333487 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:47:34.333497 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:47:34.333506 | orchestrator | 2025-09-19 00:47:34.333538 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-09-19 00:47:34.333548 | orchestrator | Friday 19 September 2025 00:47:02 +0000 (0:00:00.812) 0:01:46.840 ****** 2025-09-19 00:47:34.333558 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:47:34.333567 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:47:34.333577 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:47:34.333586 | orchestrator | 2025-09-19 00:47:34.333596 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-19 00:47:34.333605 | orchestrator | Friday 19 September 2025 00:47:02 +0000 (0:00:00.526) 0:01:47.367 ****** 2025-09-19 00:47:34.333615 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.333625 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.333635 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.333645 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.333655 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.333670 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.333685 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.333695 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.333710 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.333720 | orchestrator | 2025-09-19 00:47:34.333730 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-19 00:47:34.333739 | orchestrator | Friday 19 September 2025 
00:47:04 +0000 (0:00:01.513) 0:01:48.880 ****** 2025-09-19 00:47:34.333749 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.333759 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.333769 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.333779 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.333789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.333798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.333818 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.333828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.333844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-19 00:47:34.333854 | orchestrator | 2025-09-19 00:47:34.333863 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-19 00:47:34.333873 | orchestrator | Friday 19 September 2025 00:47:08 +0000 (0:00:04.277) 0:01:53.157 ****** 2025-09-19 00:47:34.333883 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.333893 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.333903 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.333913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 
00:47:34.333923 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.333932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.333942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.333962 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.333979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 00:47:34.333989 | orchestrator | 2025-09-19 00:47:34.333998 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-19 00:47:34.334008 | orchestrator | Friday 19 September 2025 00:47:11 +0000 (0:00:03.164) 0:01:56.321 ****** 2025-09-19 00:47:34.334048 | orchestrator | 2025-09-19 00:47:34.334060 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-19 00:47:34.334069 | orchestrator | Friday 19 September 2025 00:47:11 +0000 (0:00:00.120) 0:01:56.442 ****** 2025-09-19 00:47:34.334079 | orchestrator | 2025-09-19 00:47:34.334088 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-19 00:47:34.334098 | orchestrator | Friday 19 September 2025 00:47:12 +0000 (0:00:00.276) 0:01:56.719 ****** 2025-09-19 00:47:34.334107 | orchestrator | 2025-09-19 00:47:34.334116 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-19 00:47:34.334126 | orchestrator | Friday 19 September 2025 00:47:12 +0000 (0:00:00.063) 0:01:56.782 ****** 2025-09-19 00:47:34.334135 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:47:34.334145 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:47:34.334154 | orchestrator | 2025-09-19 00:47:34.334164 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-19 00:47:34.334173 | orchestrator | Friday 19 September 2025 00:47:13 +0000 (0:00:01.377) 0:01:58.160 ****** 2025-09-19 00:47:34.334183 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:47:34.334194 | orchestrator | changed: [testbed-node-2] 2025-09-19 
00:47:34.334210 | orchestrator | 2025-09-19 00:47:34.334226 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-19 00:47:34.334241 | orchestrator | Friday 19 September 2025 00:47:19 +0000 (0:00:06.357) 0:02:04.518 ****** 2025-09-19 00:47:34.334258 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:47:34.334275 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:47:34.334290 | orchestrator | 2025-09-19 00:47:34.334307 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-19 00:47:34.334317 | orchestrator | Friday 19 September 2025 00:47:26 +0000 (0:00:06.260) 0:02:10.778 ****** 2025-09-19 00:47:34.334326 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:47:34.334335 | orchestrator | 2025-09-19 00:47:34.334367 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-19 00:47:34.334377 | orchestrator | Friday 19 September 2025 00:47:26 +0000 (0:00:00.131) 0:02:10.910 ****** 2025-09-19 00:47:34.334387 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:47:34.334397 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:47:34.334406 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:47:34.334415 | orchestrator | 2025-09-19 00:47:34.334425 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-19 00:47:34.334434 | orchestrator | Friday 19 September 2025 00:47:27 +0000 (0:00:00.850) 0:02:11.760 ****** 2025-09-19 00:47:34.334444 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:47:34.334453 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:47:34.334463 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:47:34.334472 | orchestrator | 2025-09-19 00:47:34.334481 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-19 00:47:34.334491 | orchestrator | Friday 19 September 2025 00:47:27 
+0000 (0:00:00.628) 0:02:12.389 ****** 2025-09-19 00:47:34.334500 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:47:34.334532 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:47:34.334550 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:47:34.334567 | orchestrator | 2025-09-19 00:47:34.334577 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-19 00:47:34.334586 | orchestrator | Friday 19 September 2025 00:47:28 +0000 (0:00:00.896) 0:02:13.285 ****** 2025-09-19 00:47:34.334596 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:47:34.334605 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:47:34.334615 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:47:34.334624 | orchestrator | 2025-09-19 00:47:34.334633 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-19 00:47:34.334643 | orchestrator | Friday 19 September 2025 00:47:29 +0000 (0:00:00.680) 0:02:13.966 ****** 2025-09-19 00:47:34.334652 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:47:34.334662 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:47:34.334671 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:47:34.334680 | orchestrator | 2025-09-19 00:47:34.334690 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-19 00:47:34.334699 | orchestrator | Friday 19 September 2025 00:47:30 +0000 (0:00:00.753) 0:02:14.720 ****** 2025-09-19 00:47:34.334709 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:47:34.334718 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:47:34.334728 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:47:34.334737 | orchestrator | 2025-09-19 00:47:34.334746 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:47:34.334756 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 
2025-09-19 00:47:34.334771 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-09-19 00:47:34.334788 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-09-19 00:47:34.334798 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 00:47:34.334808 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 00:47:34.334818 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 00:47:34.334827 | orchestrator | 2025-09-19 00:47:34.334837 | orchestrator | 2025-09-19 00:47:34.334846 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 00:47:34.334856 | orchestrator | Friday 19 September 2025 00:47:31 +0000 (0:00:01.096) 0:02:15.817 ****** 2025-09-19 00:47:34.334865 | orchestrator | =============================================================================== 2025-09-19 00:47:34.334875 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 28.63s 2025-09-19 00:47:34.334884 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.80s 2025-09-19 00:47:34.334894 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.20s 2025-09-19 00:47:34.334903 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.89s 2025-09-19 00:47:34.334912 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.81s 2025-09-19 00:47:34.334922 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.50s 2025-09-19 00:47:34.334931 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.28s 2025-09-19 
00:47:34.334940 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.16s 2025-09-19 00:47:34.334950 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.53s 2025-09-19 00:47:34.334959 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.45s 2025-09-19 00:47:34.334969 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.32s 2025-09-19 00:47:34.334984 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.14s 2025-09-19 00:47:34.334994 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.76s 2025-09-19 00:47:34.335003 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.68s 2025-09-19 00:47:34.335013 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.65s 2025-09-19 00:47:34.335022 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.63s 2025-09-19 00:47:34.335031 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.51s 2025-09-19 00:47:34.335041 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.51s 2025-09-19 00:47:34.335050 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.47s 2025-09-19 00:47:34.335060 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.32s 2025-09-19 00:47:34.335072 | orchestrator | 2025-09-19 00:47:34 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:47:34.335088 | orchestrator | 2025-09-19 00:47:34 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:47:34.335104 | orchestrator | 2025-09-19 00:47:34 | INFO  | Wait 1 second(s) until the next check 2025-09-19 
00:47:37.373260 | orchestrator | 2025-09-19 00:47:37 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:47:37.375049 | orchestrator | 2025-09-19 00:47:37 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:47:37.375653 | orchestrator | 2025-09-19 00:47:37 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:50:12.752832 | orchestrator | 2025-09-19 00:50:12 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:50:12.754095 | orchestrator | 2025-09-19 00:50:12 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:50:12.754269 | orchestrator | 2025-09-19 00:50:12 | INFO  | Wait 1 second(s)
until the next check 2025-09-19 00:50:15.809697 | orchestrator | 2025-09-19 00:50:15 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:50:15.811603 | orchestrator | 2025-09-19 00:50:15 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:50:15.811891 | orchestrator | 2025-09-19 00:50:15 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:50:18.852473 | orchestrator | 2025-09-19 00:50:18 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:50:18.854354 | orchestrator | 2025-09-19 00:50:18 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:50:18.854386 | orchestrator | 2025-09-19 00:50:18 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:50:21.905725 | orchestrator | 2025-09-19 00:50:21 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state STARTED 2025-09-19 00:50:21.906975 | orchestrator | 2025-09-19 00:50:21 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED 2025-09-19 00:50:21.907047 | orchestrator | 2025-09-19 00:50:21 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:50:24.960009 | orchestrator | 2025-09-19 00:50:24 | INFO  | Task c1911044-16b2-4edf-8ad1-a00d359215cf is in state SUCCESS 2025-09-19 00:50:24.961470 | orchestrator | 2025-09-19 00:50:24.961528 | orchestrator | 2025-09-19 00:50:24.961534 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 00:50:24.961541 | orchestrator | 2025-09-19 00:50:24.961546 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 00:50:24.961555 | orchestrator | Friday 19 September 2025 00:44:03 +0000 (0:00:00.258) 0:00:00.258 ****** 2025-09-19 00:50:24.961562 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:50:24.961570 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:50:24.961578 | orchestrator | ok: 
[testbed-node-2] 2025-09-19 00:50:24.961585 | orchestrator | 2025-09-19 00:50:24.961595 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 00:50:24.961605 | orchestrator | Friday 19 September 2025 00:44:03 +0000 (0:00:00.384) 0:00:00.643 ****** 2025-09-19 00:50:24.961613 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-09-19 00:50:24.961621 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-09-19 00:50:24.961628 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-09-19 00:50:24.961636 | orchestrator | 2025-09-19 00:50:24.961643 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-09-19 00:50:24.961651 | orchestrator | 2025-09-19 00:50:24.961658 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-19 00:50:24.961667 | orchestrator | Friday 19 September 2025 00:44:04 +0000 (0:00:00.668) 0:00:01.312 ****** 2025-09-19 00:50:24.961673 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:50:24.961677 | orchestrator | 2025-09-19 00:50:24.961682 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-09-19 00:50:24.961687 | orchestrator | Friday 19 September 2025 00:44:05 +0000 (0:00:01.230) 0:00:02.543 ****** 2025-09-19 00:50:24.961692 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:50:24.961697 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:50:24.961701 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:50:24.961706 | orchestrator | 2025-09-19 00:50:24.961710 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-19 00:50:24.961715 | orchestrator | Friday 19 September 2025 00:44:06 +0000 (0:00:00.824) 0:00:03.368 ****** 2025-09-19 
00:50:24.961720 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:50:24.961724 | orchestrator | 2025-09-19 00:50:24.961729 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-09-19 00:50:24.961734 | orchestrator | Friday 19 September 2025 00:44:07 +0000 (0:00:00.978) 0:00:04.346 ****** 2025-09-19 00:50:24.961738 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:50:24.961743 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:50:24.961747 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:50:24.961752 | orchestrator | 2025-09-19 00:50:24.961756 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-09-19 00:50:24.961761 | orchestrator | Friday 19 September 2025 00:44:08 +0000 (0:00:00.733) 0:00:05.080 ****** 2025-09-19 00:50:24.961766 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-19 00:50:24.961770 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-19 00:50:24.961775 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-19 00:50:24.961780 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-19 00:50:24.961784 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-19 00:50:24.961804 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-19 00:50:24.961809 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-19 00:50:24.961814 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-19 00:50:24.961896 | orchestrator | ok: [testbed-node-0] => (item={'name': 
'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-19 00:50:24.961903 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-19 00:50:24.961908 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-19 00:50:24.961919 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-19 00:50:24.961924 | orchestrator | 2025-09-19 00:50:24.961928 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-19 00:50:24.961934 | orchestrator | Friday 19 September 2025 00:44:11 +0000 (0:00:03.453) 0:00:08.533 ****** 2025-09-19 00:50:24.961942 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-19 00:50:24.961950 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-19 00:50:24.961958 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-19 00:50:24.961966 | orchestrator | 2025-09-19 00:50:24.961973 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-19 00:50:24.961981 | orchestrator | Friday 19 September 2025 00:44:12 +0000 (0:00:00.826) 0:00:09.360 ****** 2025-09-19 00:50:24.961990 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-19 00:50:24.961995 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-19 00:50:24.962000 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-19 00:50:24.962004 | orchestrator | 2025-09-19 00:50:24.962009 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-19 00:50:24.962106 | orchestrator | Friday 19 September 2025 00:44:14 +0000 (0:00:01.384) 0:00:10.744 ****** 2025-09-19 00:50:24.962116 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-09-19 00:50:24.962121 | orchestrator | skipping: [testbed-node-0] 2025-09-19 
00:50:24.962139 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-09-19 00:50:24.962144 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.962149 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-09-19 00:50:24.962184 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.962190 | orchestrator | 2025-09-19 00:50:24.962195 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-09-19 00:50:24.962200 | orchestrator | Friday 19 September 2025 00:44:14 +0000 (0:00:00.832) 0:00:11.577 ****** 2025-09-19 00:50:24.962208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-19 00:50:24.962219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-19 00:50:24.962232 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-19 00:50:24.962237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 00:50:24.962247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}}) 2025-09-19 00:50:24.962262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 00:50:24.962270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 00:50:24.962279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 00:50:24.962286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 00:50:24.962306 | orchestrator | 2025-09-19 00:50:24.962315 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-09-19 00:50:24.962324 | orchestrator | Friday 19 September 2025 00:44:17 +0000 (0:00:02.243) 0:00:13.821 ****** 2025-09-19 00:50:24.962332 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:50:24.962339 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:50:24.962347 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:50:24.962355 | orchestrator | 2025-09-19 00:50:24.962366 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-09-19 00:50:24.962376 | orchestrator | Friday 19 September 2025 00:44:18 +0000 (0:00:01.307) 0:00:15.128 ****** 2025-09-19 00:50:24.962386 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-09-19 00:50:24.962396 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-09-19 00:50:24.962406 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-09-19 00:50:24.962417 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-09-19 00:50:24.962427 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-09-19 00:50:24.962436 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-09-19 00:50:24.962444 | orchestrator | 2025-09-19 00:50:24.962451 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-09-19 00:50:24.962460 | orchestrator | Friday 19 September 2025 00:44:20 +0000 (0:00:02.452) 0:00:17.581 ****** 2025-09-19 00:50:24.962467 | orchestrator | changed: 
[testbed-node-0] 2025-09-19 00:50:24.962474 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:50:24.962482 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:50:24.962490 | orchestrator | 2025-09-19 00:50:24.962498 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-09-19 00:50:24.962505 | orchestrator | Friday 19 September 2025 00:44:23 +0000 (0:00:02.634) 0:00:20.215 ****** 2025-09-19 00:50:24.962512 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:50:24.962520 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:50:24.962527 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:50:24.962536 | orchestrator | 2025-09-19 00:50:24.962544 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-09-19 00:50:24.962553 | orchestrator | Friday 19 September 2025 00:44:24 +0000 (0:00:01.413) 0:00:21.629 ****** 2025-09-19 00:50:24.962563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 00:50:24.962578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.962587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.962601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__562eea0cfc7b41848ccce0ad3a443f08cf6a65ac', '__omit_place_holder__562eea0cfc7b41848ccce0ad3a443f08cf6a65ac'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 00:50:24.962611 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.962619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 00:50:24.962705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.962723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.962738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__562eea0cfc7b41848ccce0ad3a443f08cf6a65ac', '__omit_place_holder__562eea0cfc7b41848ccce0ad3a443f08cf6a65ac'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 00:50:24.962746 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.962755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 00:50:24.962769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.962777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.962786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__562eea0cfc7b41848ccce0ad3a443f08cf6a65ac', '__omit_place_holder__562eea0cfc7b41848ccce0ad3a443f08cf6a65ac'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 00:50:24.962794 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.962802 | orchestrator | 2025-09-19 00:50:24.962810 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-09-19 00:50:24.962819 | orchestrator | Friday 19 September 2025 00:44:25 +0000 (0:00:00.530) 0:00:22.160 ****** 2025-09-19 00:50:24.962827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-19 00:50:24.962836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-19 00:50:24.962873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-19 00:50:24.962879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 00:50:24.962884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.962889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__562eea0cfc7b41848ccce0ad3a443f08cf6a65ac', '__omit_place_holder__562eea0cfc7b41848ccce0ad3a443f08cf6a65ac'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 00:50:24.962897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 00:50:24.962902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.962911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__562eea0cfc7b41848ccce0ad3a443f08cf6a65ac', '__omit_place_holder__562eea0cfc7b41848ccce0ad3a443f08cf6a65ac'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 00:50:24.962922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 00:50:24.962926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.962931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__562eea0cfc7b41848ccce0ad3a443f08cf6a65ac', '__omit_place_holder__562eea0cfc7b41848ccce0ad3a443f08cf6a65ac'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 00:50:24.962936 | orchestrator | 2025-09-19 00:50:24.962941 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-09-19 00:50:24.962945 | orchestrator | Friday 19 September 2025 00:44:28 +0000 (0:00:03.026) 0:00:25.186 ****** 2025-09-19 00:50:24.962950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-19 00:50:24.962958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-19 00:50:24.962975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-19 00:50:24.962980 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 00:50:24.962984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 00:50:24.962989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 
2025-09-19 00:50:24.962994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 00:50:24.963002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 00:50:24.963007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 00:50:24.963015 | orchestrator | 2025-09-19 00:50:24.963019 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-09-19 00:50:24.963079 | orchestrator | Friday 19 September 2025 00:44:32 +0000 (0:00:03.811) 0:00:28.998 ****** 2025-09-19 00:50:24.963086 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-19 00:50:24.963137 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-19 00:50:24.963161 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-19 00:50:24.963167 | orchestrator | 2025-09-19 00:50:24.963171 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-09-19 00:50:24.963176 | orchestrator | Friday 19 September 2025 00:44:35 +0000 (0:00:03.482) 0:00:32.480 ****** 2025-09-19 00:50:24.963181 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-19 00:50:24.963186 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-19 00:50:24.963190 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-19 00:50:24.963195 | orchestrator | 2025-09-19 00:50:24.963199 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-09-19 00:50:24.963204 | orchestrator | Friday 19 September 2025 00:44:40 +0000 (0:00:04.866) 0:00:37.346 ****** 2025-09-19 00:50:24.963209 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.963214 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.963218 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.963224 | orchestrator | 2025-09-19 00:50:24.963232 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-09-19 00:50:24.963240 | orchestrator | Friday 19 September 2025 00:44:41 +0000 (0:00:00.569) 0:00:37.916 ****** 2025-09-19 00:50:24.963245 | orchestrator | changed: [testbed-node-0] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-19 00:50:24.963275 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-19 00:50:24.963281 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-19 00:50:24.963285 | orchestrator | 2025-09-19 00:50:24.963290 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-09-19 00:50:24.963295 | orchestrator | Friday 19 September 2025 00:44:43 +0000 (0:00:02.338) 0:00:40.254 ****** 2025-09-19 00:50:24.963299 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-19 00:50:24.963304 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-19 00:50:24.963310 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-19 00:50:24.963314 | orchestrator | 2025-09-19 00:50:24.963319 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-09-19 00:50:24.963323 | orchestrator | Friday 19 September 2025 00:44:45 +0000 (0:00:02.376) 0:00:42.630 ****** 2025-09-19 00:50:24.963332 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-09-19 00:50:24.963340 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-09-19 00:50:24.963354 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-09-19 00:50:24.963361 | orchestrator | 2025-09-19 00:50:24.963369 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-09-19 00:50:24.963377 | orchestrator | Friday 19 September 2025 00:44:48 +0000 (0:00:02.144) 0:00:44.774 ****** 
2025-09-19 00:50:24.963386 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-09-19 00:50:24.963395 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-09-19 00:50:24.963403 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-09-19 00:50:24.963411 | orchestrator | 2025-09-19 00:50:24.963416 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-19 00:50:24.963421 | orchestrator | Friday 19 September 2025 00:44:49 +0000 (0:00:01.815) 0:00:46.590 ****** 2025-09-19 00:50:24.963426 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:50:24.963431 | orchestrator | 2025-09-19 00:50:24.963435 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-09-19 00:50:24.963443 | orchestrator | Friday 19 September 2025 00:44:50 +0000 (0:00:00.910) 0:00:47.501 ****** 2025-09-19 00:50:24.963448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-19 00:50:24.963458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-19 00:50:24.963463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-19 00:50:24.963468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 00:50:24.963473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 00:50:24.963482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 00:50:24.963489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 00:50:24.963494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 00:50:24.963504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 00:50:24.963509 | orchestrator | 2025-09-19 00:50:24.963513 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-09-19 00:50:24.963518 | orchestrator | Friday 19 September 2025 00:44:54 +0000 (0:00:03.457) 0:00:50.958 ****** 2025-09-19 00:50:24.963523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 00:50:24.963528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.963537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.963560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 00:50:24.963567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.963576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.963581 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.963585 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.963590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 00:50:24.963597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.963610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.963616 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.963623 | orchestrator | 2025-09-19 00:50:24.963629 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-09-19 00:50:24.963638 | orchestrator | Friday 19 September 2025 00:44:55 +0000 (0:00:01.314) 0:00:52.272 ****** 2025-09-19 00:50:24.963642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 00:50:24.963649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.963657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.963661 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.963665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 00:50:24.963670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.963678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.963682 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.963686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 00:50:24.963693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.963698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.963702 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.963707 | orchestrator | 2025-09-19 00:50:24.963711 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-19 00:50:24.963715 | orchestrator | Friday 19 September 2025 00:44:57 +0000 (0:00:01.616) 0:00:53.888 ****** 2025-09-19 00:50:24.963722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 00:50:24.963727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.963734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.963739 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.963743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 00:50:24.963747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.963754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.963758 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.963765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 00:50:24.963770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.963779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.963783 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.963787 | orchestrator | 2025-09-19 00:50:24.963791 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-19 
00:50:24.963795 | orchestrator | Friday 19 September 2025 00:44:57 +0000 (0:00:00.760) 0:00:54.649 ****** 2025-09-19 00:50:24.963800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 00:50:24.963804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.963811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.963815 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.963820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 00:50:24.963828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.963835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.963840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 00:50:24.963844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.963848 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.963853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.963857 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.963861 | orchestrator | 2025-09-19 00:50:24.963867 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-19 00:50:24.963872 | orchestrator | Friday 19 September 2025 00:44:58 +0000 (0:00:00.609) 0:00:55.258 ****** 2025-09-19 00:50:24.963876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 00:50:24.964316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.964336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.964340 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.964345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 00:50:24.964349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.964354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.964358 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.964367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 00:50:24.964383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.964394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.964398 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.964402 | orchestrator | 2025-09-19 00:50:24.964407 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-09-19 00:50:24.964411 | orchestrator | Friday 19 September 2025 00:45:01 +0000 (0:00:02.811) 0:00:58.070 ****** 2025-09-19 00:50:24.964416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 00:50:24.964420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.964424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.964431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 00:50:24.964436 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.964440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.964461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.964466 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.964471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 00:50:24.964475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.964479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.964483 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.964488 | orchestrator | 2025-09-19 00:50:24.964492 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-09-19 00:50:24.964496 | orchestrator | Friday 19 September 2025 00:45:02 +0000 (0:00:00.735) 0:00:58.805 ****** 2025-09-19 00:50:24.964503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 00:50:24.964511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.964524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.964529 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.964533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 00:50:24.964538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.964542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.964546 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.964551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 00:50:24.964558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.964566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.964570 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.964574 | orchestrator | 2025-09-19 00:50:24.964578 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-09-19 00:50:24.964591 | orchestrator | Friday 19 September 2025 00:45:02 +0000 (0:00:00.645) 0:00:59.450 ****** 2025-09-19 00:50:24.964596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 
 2025-09-19 00:50:24.964601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.964605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.964609 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.964614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2025-09-19 00:50:24.964622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.964626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.964630 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.964644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  
2025-09-19 00:50:24.964649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 00:50:24.964653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 00:50:24.964657 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.964661 | orchestrator | 2025-09-19 00:50:24.964666 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-09-19 00:50:24.964670 | orchestrator | Friday 19 September 2025 00:45:03 +0000 (0:00:01.075) 0:01:00.526 ****** 2025-09-19 00:50:24.964674 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-19 00:50:24.964678 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-19 00:50:24.964683 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-09-19 00:50:24.964687 | orchestrator |
2025-09-19 00:50:24.964694 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-09-19 00:50:24.964698 | orchestrator | Friday 19 September 2025 00:45:05 +0000 (0:00:01.428) 0:01:01.954 ******
2025-09-19 00:50:24.964702 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-09-19 00:50:24.964706 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-09-19 00:50:24.964710 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-09-19 00:50:24.964714 | orchestrator |
2025-09-19 00:50:24.964719 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-09-19 00:50:24.964723 | orchestrator | Friday 19 September 2025 00:45:06 +0000 (0:00:01.395) 0:01:03.350 ******
2025-09-19 00:50:24.964727 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 00:50:24.964745 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 00:50:24.964749 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 00:50:24.964754 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.964760 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 00:50:24.964765 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 00:50:24.964769 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.964773 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 00:50:24.964777 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.964781 | orchestrator |
2025-09-19 00:50:24.964785 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2025-09-19 00:50:24.964789 | orchestrator | Friday 19 September 2025 00:45:07 +0000 (0:00:00.998) 0:01:04.349 ******
2025-09-19 00:50:24.964803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-19 00:50:24.964809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-19 00:50:24.964813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-19 00:50:24.964820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 00:50:24.964825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 00:50:24.964831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 00:50:24.964835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 00:50:24.964875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 00:50:24.964880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 00:50:24.964884 | orchestrator |
2025-09-19 00:50:24.964888 | orchestrator | TASK [include_role : aodh] *****************************************************
2025-09-19 00:50:24.964892 | orchestrator | Friday 19 September 2025 00:45:10 +0000 (0:00:02.993) 0:01:07.342 ******
2025-09-19 00:50:24.964897 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 00:50:24.964904 | orchestrator |
2025-09-19 00:50:24.964908 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2025-09-19 00:50:24.964912 | orchestrator | Friday 19 September 2025 00:45:11 +0000 (0:00:00.767) 0:01:08.110 ******
2025-09-19 00:50:24.964917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-09-19 00:50:24.964923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-09-19 00:50:24.964930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.964934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.964942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-09-19 00:50:24.964947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-09-19 00:50:24.964955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.964960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.964968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-09-19 00:50:24.964973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-09-19 00:50:24.964981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.964988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.965000 | orchestrator |
2025-09-19 00:50:24.965007 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2025-09-19 00:50:24.965015 | orchestrator | Friday 19 September 2025 00:45:18 +0000 (0:00:06.931) 0:01:15.041 ******
2025-09-19 00:50:24.965022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-09-19 00:50:24.965031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-09-19 00:50:24.965039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.965044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.965048 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.965057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-09-19 00:50:24.965065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-09-19 00:50:24.965070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.965075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.965082 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.965319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-09-19 00:50:24.965333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-09-19 00:50:24.965350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.965360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.965364 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.965369 | orchestrator |
2025-09-19 00:50:24.965373 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2025-09-19 00:50:24.965377 | orchestrator | Friday 19 September 2025 00:45:19 +0000 (0:00:01.118) 0:01:16.160 ******
2025-09-19 00:50:24.965382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-09-19 00:50:24.965388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-09-19 00:50:24.965393 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.965397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-09-19 00:50:24.965401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-09-19 00:50:24.965405 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.965410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-09-19 00:50:24.965414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-09-19 00:50:24.965418 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.965422 | orchestrator |
2025-09-19 00:50:24.965426 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2025-09-19 00:50:24.965430 | orchestrator | Friday 19 September 2025 00:45:20 +0000 (0:00:01.108) 0:01:17.268 ******
2025-09-19 00:50:24.965435 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:50:24.965439 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:50:24.965443 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:50:24.965447 | orchestrator |
2025-09-19 00:50:24.965451 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2025-09-19 00:50:24.965455 | orchestrator | Friday 19 September 2025 00:45:22 +0000 (0:00:01.781) 0:01:19.050 ******
2025-09-19 00:50:24.965459 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:50:24.965463 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:50:24.965467 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:50:24.965472 | orchestrator |
2025-09-19 00:50:24.965478 | orchestrator | TASK [include_role : barbican] *************************************************
2025-09-19 00:50:24.965482 | orchestrator | Friday 19 September 2025 00:45:24 +0000 (0:00:02.089) 0:01:21.140 ******
2025-09-19 00:50:24.965489 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 00:50:24.965496 | orchestrator |
2025-09-19 00:50:24.965503 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2025-09-19 00:50:24.965515 | orchestrator | Friday 19 September 2025 00:45:25 +0000 (0:00:01.131) 0:01:22.271 ******
2025-09-19 00:50:24.965538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-19 00:50:24.965543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.965548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.965553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-19 00:50:24.965562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.965570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.965585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-19 00:50:24.965589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.965594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.965598 | orchestrator |
2025-09-19 00:50:24.965602 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2025-09-19 00:50:24.965607 | orchestrator | Friday 19 September 2025 00:45:30 +0000 (0:00:04.603) 0:01:26.875 ******
2025-09-19 00:50:24.965611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-19 00:50:24.965620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.965635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.965640 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.965644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries':
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 00:50:24.965649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.965653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.965657 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.965664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 
'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 00:50:24.965681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.965686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.965690 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.965694 | orchestrator | 2025-09-19 00:50:24.965698 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-09-19 00:50:24.965702 | orchestrator | Friday 19 September 2025 00:45:31 +0000 (0:00:00.819) 0:01:27.694 ****** 2025-09-19 00:50:24.965707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 00:50:24.965711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 00:50:24.965716 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.965720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 00:50:24.965724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 00:50:24.965729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 00:50:24.965733 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.965737 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 00:50:24.965741 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.965749 | orchestrator | 2025-09-19 00:50:24.965753 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-09-19 00:50:24.965757 | orchestrator | Friday 19 September 2025 00:45:31 +0000 (0:00:00.688) 0:01:28.383 ****** 2025-09-19 00:50:24.965761 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:50:24.965765 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:50:24.965769 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:50:24.965773 | orchestrator | 2025-09-19 00:50:24.965778 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-09-19 00:50:24.965782 | orchestrator | Friday 19 September 2025 00:45:33 +0000 (0:00:01.807) 0:01:30.190 ****** 2025-09-19 00:50:24.965786 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:50:24.965790 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:50:24.965794 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:50:24.965798 | orchestrator | 2025-09-19 00:50:24.965802 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-09-19 00:50:24.965806 | orchestrator | Friday 19 September 2025 00:45:35 +0000 (0:00:01.969) 0:01:32.160 ****** 2025-09-19 00:50:24.965810 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.965814 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.965821 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.965825 | orchestrator | 2025-09-19 00:50:24.965829 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-09-19 00:50:24.965833 | orchestrator | Friday 19 
September 2025 00:45:35 +0000 (0:00:00.392) 0:01:32.552 ****** 2025-09-19 00:50:24.966474 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:50:24.966483 | orchestrator | 2025-09-19 00:50:24.966487 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-09-19 00:50:24.966491 | orchestrator | Friday 19 September 2025 00:45:36 +0000 (0:00:00.589) 0:01:33.142 ****** 2025-09-19 00:50:24.966513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-19 00:50:24.966519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 
192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-19 00:50:24.966524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-19 00:50:24.966535 | orchestrator | 2025-09-19 00:50:24.966539 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-09-19 00:50:24.966543 | orchestrator | Friday 19 September 2025 00:45:38 +0000 (0:00:02.437) 0:01:35.580 ****** 2025-09-19 00:50:24.966548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-19 00:50:24.966552 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.966561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-19 00:50:24.966565 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.966580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-19 00:50:24.966585 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.966589 | orchestrator | 2025-09-19 00:50:24.966593 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-09-19 00:50:24.966597 | orchestrator | Friday 19 September 2025 00:45:40 +0000 (0:00:01.814) 0:01:37.394 ****** 2025-09-19 00:50:24.966603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-19 00:50:24.966612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-19 00:50:24.966618 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.966622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-19 00:50:24.966626 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-19 00:50:24.966631 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.966635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-19 00:50:24.966641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-19 00:50:24.966646 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.966650 | orchestrator | 2025-09-19 00:50:24.966654 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-09-19 00:50:24.966658 | orchestrator | Friday 19 September 2025 00:45:42 +0000 (0:00:01.597) 0:01:38.992 ****** 2025-09-19 00:50:24.966662 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.966666 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.966670 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.966674 | orchestrator | 
2025-09-19 00:50:24.966678 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-09-19 00:50:24.966682 | orchestrator | Friday 19 September 2025 00:45:42 +0000 (0:00:00.393) 0:01:39.386 ****** 2025-09-19 00:50:24.966687 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.966691 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.966695 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.966699 | orchestrator | 2025-09-19 00:50:24.966703 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-09-19 00:50:24.966716 | orchestrator | Friday 19 September 2025 00:45:44 +0000 (0:00:01.397) 0:01:40.783 ****** 2025-09-19 00:50:24.966721 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:50:24.966725 | orchestrator | 2025-09-19 00:50:24.966729 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-09-19 00:50:24.966733 | orchestrator | Friday 19 September 2025 00:45:44 +0000 (0:00:00.782) 0:01:41.565 ****** 2025-09-19 00:50:24.966741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 00:50:24.966747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.966751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.966758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 00:50:24.966772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.966780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.966785 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.966789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.966796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-19 00:50:24.966800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.966814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.966822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.966827 | orchestrator |
2025-09-19 00:50:24.966831 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2025-09-19 00:50:24.966835 | orchestrator | Friday 19 September 2025 00:45:47 +0000 (0:00:03.085) 0:01:44.650 ******
2025-09-19 00:50:24.966840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-19 00:50:24.966844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.966851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.966870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.966875 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.966879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-19 00:50:24.966883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.966888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.966944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.966955 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.966975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-19 00:50:24.966983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.966990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.966996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967005 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.967014 | orchestrator |
2025-09-19 00:50:24.967021 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-09-19 00:50:24.967027 | orchestrator | Friday 19 September 2025 00:45:48 +0000 (0:00:00.677) 0:01:45.328 ******
2025-09-19 00:50:24.967037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-19 00:50:24.967045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-19 00:50:24.967057 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.967064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-19 00:50:24.967070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-19 00:50:24.967078 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.967358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-19 00:50:24.967372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-19 00:50:24.967378 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.967383 | orchestrator |
2025-09-19 00:50:24.967389 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-09-19 00:50:24.967395 | orchestrator | Friday 19 September 2025 00:45:49 +0000 (0:00:01.312) 0:01:46.640 ******
2025-09-19 00:50:24.967400 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:50:24.967404 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:50:24.967409 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:50:24.967414 | orchestrator |
2025-09-19 00:50:24.967418 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-09-19 00:50:24.967423 | orchestrator | Friday 19 September 2025 00:45:51 +0000 (0:00:01.336) 0:01:47.976 ******
2025-09-19 00:50:24.967428 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:50:24.967432 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:50:24.967437 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:50:24.967442 | orchestrator |
2025-09-19 00:50:24.967446 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-09-19 00:50:24.967451 | orchestrator | Friday 19 September 2025 00:45:53 +0000 (0:00:02.070) 0:01:50.046 ******
2025-09-19 00:50:24.967456 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.967460 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.967465 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.967470 | orchestrator |
2025-09-19 00:50:24.967474 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-09-19 00:50:24.967479 | orchestrator | Friday 19 September 2025 00:45:53 +0000 (0:00:00.332) 0:01:50.379 ******
2025-09-19 00:50:24.967484 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.967489 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.967493 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.967497 | orchestrator |
2025-09-19 00:50:24.967501 | orchestrator | TASK [include_role : designate] ************************************************
2025-09-19 00:50:24.967506 | orchestrator | Friday 19 September 2025 00:45:54 +0000 (0:00:00.548) 0:01:50.927 ******
2025-09-19 00:50:24.967510 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 00:50:24.967514 | orchestrator |
2025-09-19 00:50:24.967518 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-09-19 00:50:24.967522 | orchestrator | Friday 19 September 2025 00:45:55 +0000 (0:00:00.777) 0:01:51.704 ******
2025-09-19 00:50:24.967528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 00:50:24.967542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 00:50:24.967546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 00:50:24.967591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 00:50:24.967605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 00:50:24.967654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 00:50:24.967659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967685 | orchestrator |
2025-09-19 00:50:24.967690 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2025-09-19 00:50:24.967694 | orchestrator | Friday 19 September 2025 00:45:59 +0000 (0:00:04.834) 0:01:56.539 ******
2025-09-19 00:50:24.967707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 00:50:24.967712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 00:50:24.967716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 00:50:24.967754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967758 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.967762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 00:50:24.967772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink',
'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.967810 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.967815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 00:50:24.967822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 00:50:24.967826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.967832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.967837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.967859 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.967866 | orchestrator |
2025-09-19 00:50:24.967870 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2025-09-19 00:50:24.967874 | orchestrator | Friday 19 September 2025 00:46:01 +0000 (0:00:01.346) 0:01:57.885 ******
2025-09-19 00:50:24.967879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-09-19 00:50:24.967883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-09-19 00:50:24.967887 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.967891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-09-19 00:50:24.967896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-09-19 00:50:24.967900 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.967904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-09-19 00:50:24.967908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-09-19 00:50:24.967912 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.967916 | orchestrator |
2025-09-19 00:50:24.967920 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2025-09-19 00:50:24.967924 | orchestrator | Friday 19 September 2025 00:46:02 +0000 (0:00:01.012) 0:01:58.898 ******
2025-09-19 00:50:24.967929 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:50:24.967933 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:50:24.967937 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:50:24.967941 | orchestrator |
2025-09-19 00:50:24.967945 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2025-09-19 00:50:24.967949 | orchestrator | Friday 19 September 2025 00:46:03 +0000 (0:00:01.433) 0:02:00.331 ******
2025-09-19 00:50:24.967953 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:50:24.967957 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:50:24.967961 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:50:24.967965 | orchestrator |
2025-09-19 00:50:24.967969 | orchestrator | TASK [include_role : etcd] *****************************************************
2025-09-19 00:50:24.967975 | orchestrator | Friday 19 September 2025 00:46:05 +0000 (0:00:02.099) 0:02:02.431 ******
2025-09-19 00:50:24.967979 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.967983 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.967987 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.967991 | orchestrator |
2025-09-19 00:50:24.967996 | orchestrator | TASK [include_role : glance] ***************************************************
2025-09-19 00:50:24.968000 | orchestrator | Friday 19 September 2025 00:46:06 +0000 (0:00:00.504) 0:02:02.936 ******
2025-09-19 00:50:24.968004 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 00:50:24.968008 | orchestrator |
2025-09-19 00:50:24.968012 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2025-09-19 00:50:24.968016 | orchestrator | Friday 19 September 2025 00:46:07 +0000 (0:00:00.807) 0:02:03.744 ******
2025-09-19 00:50:24.968031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy':
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 00:50:24.968041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 00:50:24.968054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 
'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 00:50:24.968062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 00:50:24.968079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 00:50:24.968087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-09-19 00:50:24.968109 | orchestrator |
2025-09-19 00:50:24.968114 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2025-09-19 00:50:24.968118 | orchestrator | Friday 19 September 2025 00:46:11 +0000 (0:00:04.093) 0:02:07.837 ******
2025-09-19 00:50:24.968135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False,
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 00:50:24.968143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 00:50:24.968148 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.968154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 00:50:24.968170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 00:50:24.968175 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.968182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 00:50:24.968196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 
5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 00:50:24.968203 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.968207 | orchestrator | 2025-09-19 00:50:24.968211 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-09-19 00:50:24.968215 | orchestrator | Friday 19 September 2025 00:46:14 +0000 (0:00:03.046) 0:02:10.883 ****** 2025-09-19 00:50:24.968219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 00:50:24.968223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 00:50:24.968228 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.968231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 00:50:24.968238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 00:50:24.968245 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.968249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 00:50:24.968262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-09-19 00:50:24.968266 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.968270 | orchestrator |
2025-09-19 00:50:24.968274 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2025-09-19 00:50:24.968277 | orchestrator | Friday 19 September 2025 00:46:17 +0000 (0:00:03.190) 0:02:14.074 ******
2025-09-19 00:50:24.968283 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:50:24.968289 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:50:24.968296 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:50:24.968302 | orchestrator |
2025-09-19 00:50:24.968524 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2025-09-19 00:50:24.968528 | orchestrator | Friday 19 September 2025 00:46:18 +0000 (0:00:01.384) 0:02:15.458 ******
2025-09-19 00:50:24.968532 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:50:24.968536 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:50:24.968540 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:50:24.968543 | orchestrator |
2025-09-19 00:50:24.968547 | orchestrator | TASK [include_role : gnocchi] **************************************************
2025-09-19 00:50:24.968551 | orchestrator | Friday 19 September 2025 00:46:20 +0000 (0:00:02.003) 0:02:17.461 ******
2025-09-19 00:50:24.968555 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.968558 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.968562 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.968566 | orchestrator |
2025-09-19 00:50:24.968570 | orchestrator | TASK [include_role : grafana] **************************************************
2025-09-19 00:50:24.968576 | orchestrator | Friday 19 September 2025 00:46:21 +0000 (0:00:00.537) 0:02:17.998 ******
2025-09-19 00:50:24.968582 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:50:24.968588 | orchestrator | 2025-09-19 00:50:24.968594 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-09-19 00:50:24.968600 | orchestrator | Friday 19 September 2025 00:46:22 +0000 (0:00:00.867) 0:02:18.865 ****** 2025-09-19 00:50:24.968608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 00:50:24.968626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 00:50:24.968637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 00:50:24.968644 | orchestrator | 2025-09-19 00:50:24.968650 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-09-19 00:50:24.968656 | orchestrator | Friday 19 September 2025 00:46:25 +0000 (0:00:03.555) 0:02:22.421 ****** 2025-09-19 00:50:24.968714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 00:50:24.968721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 00:50:24.968725 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.968729 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.968732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 00:50:24.968737 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.968745 | orchestrator | 2025-09-19 00:50:24.968749 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-09-19 00:50:24.968753 | orchestrator | Friday 19 September 2025 00:46:26 +0000 (0:00:00.651) 0:02:23.072 ****** 2025-09-19 00:50:24.968757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-19 00:50:24.968761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  
2025-09-19 00:50:24.968766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-09-19 00:50:24.968770 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.968773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-09-19 00:50:24.968777 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.968784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-09-19 00:50:24.968788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-09-19 00:50:24.968792 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.968795 | orchestrator |
2025-09-19 00:50:24.968799 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2025-09-19 00:50:24.968803 | orchestrator | Friday 19 September 2025 00:46:27 +0000 (0:00:00.660) 0:02:23.733 ******
2025-09-19 00:50:24.968807 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:50:24.968810 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:50:24.968814 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:50:24.968818 | orchestrator |
2025-09-19 00:50:24.968821 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2025-09-19 00:50:24.968825 | orchestrator | Friday 19 September 2025 00:46:28 +0000 (0:00:01.331) 0:02:25.065 ******
2025-09-19 00:50:24.968829 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:50:24.968833 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:50:24.968836 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:50:24.968840 | orchestrator |
2025-09-19 00:50:24.968854 | orchestrator | TASK [include_role : heat] *****************************************************
2025-09-19 00:50:24.968858 | orchestrator | Friday 19 September 2025 00:46:30 +0000 (0:00:02.072) 0:02:27.138 ******
2025-09-19 00:50:24.968862 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.968869 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.968876 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.968882 | orchestrator |
2025-09-19 00:50:24.968889 | orchestrator | TASK [include_role : horizon] **************************************************
2025-09-19 00:50:24.968895 | orchestrator | Friday 19 September 2025 00:46:31 +0000 (0:00:00.552) 0:02:27.690 ******
2025-09-19 00:50:24.968902 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 00:50:24.968909 | orchestrator |
2025-09-19 00:50:24.968916 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2025-09-19 00:50:24.968920 | orchestrator | Friday 19 September 2025 00:46:32 +0000 (0:00:01.002) 0:02:28.693 ******
2025-09-19 00:50:24.968925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 
'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 00:50:24.968949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 
'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 00:50:24.968959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2025-09-19 00:50:24.968964 | orchestrator | 2025-09-19 00:50:24.968970 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-09-19 00:50:24.968974 | orchestrator | Friday 19 September 2025 00:46:37 +0000 (0:00:04.974) 0:02:33.667 ****** 2025-09-19 00:50:24.968987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 00:50:24.968996 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.969003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': 
'80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 00:50:24.969007 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.969020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-19 00:50:24.969030 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.969034 | orchestrator |
2025-09-19 00:50:24.969037 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2025-09-19 00:50:24.969079 | orchestrator | Friday 19 September 2025 00:46:38 +0000 (0:00:01.488) 0:02:35.155 ******
2025-09-19 00:50:24.969084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-19 00:50:24.969088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-19 00:50:24.969285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-19 00:50:24.969308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-19 00:50:24.969313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-19 00:50:24.969317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-19 00:50:24.969335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-19 00:50:24.969465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-09-19 00:50:24.969477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-19 00:50:24.969481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-19 00:50:24.969485 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.969488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-09-19 00:50:24.969492 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.969496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-19 00:50:24.969500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-19 00:50:24.969504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-19 00:50:24.969508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-09-19 00:50:24.969511 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.969515 | orchestrator |
2025-09-19 00:50:24.969520 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2025-09-19 00:50:24.969523 | orchestrator | Friday 19 September 2025 00:46:39 +0000 (0:00:01.251) 0:02:36.406 ******
2025-09-19 00:50:24.969527 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:50:24.969531 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:50:24.969535 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:50:24.969538 | orchestrator |
2025-09-19 00:50:24.969542 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2025-09-19 00:50:24.969546 | orchestrator | Friday 19 September 2025 00:46:41 +0000 (0:00:01.503) 0:02:37.909 ******
2025-09-19 00:50:24.969549 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:50:24.969553 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:50:24.969557 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:50:24.969560 | orchestrator |
2025-09-19 00:50:24.969564 | orchestrator | TASK [include_role : influxdb] *************************************************
2025-09-19 00:50:24.969568 | orchestrator | Friday 19 September 2025 00:46:43 +0000 (0:00:02.225) 0:02:40.134 ******
2025-09-19 00:50:24.969574 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.969578 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.969638 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.969657 | orchestrator |
2025-09-19 00:50:24.969661 | orchestrator | TASK [include_role : ironic] ***************************************************
2025-09-19 00:50:24.969674 | orchestrator | Friday 19 September 2025 00:46:44 +0000 (0:00:00.548) 0:02:40.683 ******
2025-09-19 00:50:24.969678 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.969682 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.969690 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.969694 | orchestrator |
2025-09-19 00:50:24.969704 | orchestrator | TASK [include_role : keystone] *************************************************
2025-09-19 00:50:24.969708 | orchestrator | Friday 19 September 2025 00:46:44 +0000 (0:00:00.355) 0:02:41.038 ******
2025-09-19 00:50:24.969711 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 00:50:24.969715 | orchestrator |
2025-09-19 00:50:24.969719 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2025-09-19 00:50:24.969722 | orchestrator | Friday 19 September 2025 00:46:45 +0000 (0:00:01.234) 0:02:42.273 ******
2025-09-19 00:50:24.969743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 00:50:24.969748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 00:50:24.969753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 00:50:24.969760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 00:50:24.969779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 00:50:24.969784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 00:50:24.969788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 00:50:24.969792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 00:50:24.969797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 00:50:24.969800 | orchestrator |
2025-09-19 00:50:24.969804 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2025-09-19 00:50:24.969808 | orchestrator | Friday 19 September 2025 00:46:49 +0000 (0:00:03.509) 0:02:45.782 ******
2025-09-19 00:50:24.969815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 00:50:24.969833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 00:50:24.969837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 00:50:24.969841 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.969846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 00:50:24.969850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 00:50:24.969860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 00:50:24.969864 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.969903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 00:50:24.969909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 00:50:24.969913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 00:50:24.969917 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.969921 | orchestrator |
2025-09-19 00:50:24.969925 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2025-09-19 00:50:24.969928 | orchestrator | Friday 19 September 2025 00:46:49 +0000 (0:00:00.637) 0:02:46.420 ******
2025-09-19 00:50:24.969933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-19 00:50:24.969937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-19 00:50:24.969941 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.969944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-19 00:50:24.970029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-19 00:50:24.970035 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.970038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-19 00:50:24.970045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-19 00:50:24.970049 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.970053 | orchestrator |
2025-09-19 00:50:24.970057 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2025-09-19 00:50:24.970060 | orchestrator | Friday 19 September 2025 00:46:50 +0000 (0:00:00.815) 0:02:47.235 ******
2025-09-19 00:50:24.970200 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:50:24.970208 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:50:24.970212 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:50:24.970216 | orchestrator |
2025-09-19 00:50:24.970220 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2025-09-19 00:50:24.970224 | orchestrator | Friday 19 September 2025 00:46:52 +0000 (0:00:01.640) 0:02:48.875 ******
2025-09-19 00:50:24.970227 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:50:24.970231 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:50:24.970235 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:50:24.970239 | orchestrator |
2025-09-19 00:50:24.970242 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2025-09-19 00:50:24.970293 | orchestrator | Friday 19 September 2025 00:46:54 +0000 (0:00:02.100) 0:02:50.976 ******
2025-09-19 00:50:24.970299 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.970303 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.970306 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.970310 | orchestrator |
2025-09-19 00:50:24.970314 | orchestrator | TASK [include_role : magnum] ***************************************************
2025-09-19 00:50:24.970318 | orchestrator | Friday 19 September 2025 00:46:54 +0000 (0:00:00.308) 0:02:51.285 ******
2025-09-19 00:50:24.970322 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 00:50:24.970326 | orchestrator |
2025-09-19 00:50:24.970329 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2025-09-19 00:50:24.970333 | orchestrator | Friday 19 September 2025 00:46:55 +0000 (0:00:01.129) 0:02:52.414 ******
2025-09-19 00:50:24.970337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 00:50:24.970347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.970352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 00:50:24.970368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.970384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 00:50:24.970389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.970396 | orchestrator |
2025-09-19 00:50:24.970400 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2025-09-19 00:50:24.970404 | orchestrator | Friday 19 September 2025 00:46:59 +0000 (0:00:03.478) 0:02:55.893 ******
2025-09-19 00:50:24.970408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 00:50:24.970416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.970420 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.970434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 00:50:24.970438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.970442 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.970449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 00:50:24.970453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.970457 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.970461 | orchestrator |
2025-09-19 00:50:24.970464 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2025-09-19 00:50:24.970468 | orchestrator | Friday
19 September 2025 00:46:59 +0000 (0:00:00.739) 0:02:56.632 ****** 2025-09-19 00:50:24.970472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-19 00:50:24.970479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-19 00:50:24.970483 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.970487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-19 00:50:24.970490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-19 00:50:24.970494 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.970498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-19 00:50:24.970502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-19 00:50:24.970515 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.970519 | orchestrator | 2025-09-19 00:50:24.970523 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-09-19 00:50:24.970527 | orchestrator | Friday 19 September 2025 00:47:01 +0000 (0:00:01.083) 0:02:57.716 ****** 2025-09-19 00:50:24.970530 | orchestrator 
| changed: [testbed-node-0] 2025-09-19 00:50:24.970534 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:50:24.970538 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:50:24.970541 | orchestrator | 2025-09-19 00:50:24.970549 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-09-19 00:50:24.970553 | orchestrator | Friday 19 September 2025 00:47:02 +0000 (0:00:01.688) 0:02:59.404 ****** 2025-09-19 00:50:24.970557 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:50:24.970560 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:50:24.970564 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:50:24.970568 | orchestrator | 2025-09-19 00:50:24.970571 | orchestrator | TASK [include_role : manila] *************************************************** 2025-09-19 00:50:24.970575 | orchestrator | Friday 19 September 2025 00:47:05 +0000 (0:00:02.414) 0:03:01.819 ****** 2025-09-19 00:50:24.970579 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:50:24.970609 | orchestrator | 2025-09-19 00:50:24.970613 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-09-19 00:50:24.970617 | orchestrator | Friday 19 September 2025 00:47:06 +0000 (0:00:01.161) 0:03:02.981 ****** 2025-09-19 00:50:24.970621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-19 00:50:24.970625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.970632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.970650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.970666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-19 00:50:24.970675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.970741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-19 00:50:24.970746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.970752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.970756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.970776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.970781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.970784 | orchestrator | 2025-09-19 00:50:24.970788 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-09-19 00:50:24.970804 | orchestrator | Friday 19 September 2025 00:47:10 +0000 (0:00:04.003) 0:03:06.984 ****** 2025-09-19 00:50:24.970809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-19 00:50:24.970813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.970916 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.970923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.970932 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.970967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-19 00:50:24.970973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.970978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.970982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.970986 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.970994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-19 00:50:24.971053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.971059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 
'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.971064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.971068 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.971072 | orchestrator | 2025-09-19 00:50:24.971076 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-09-19 00:50:24.971080 | orchestrator | Friday 19 September 2025 00:47:11 +0000 (0:00:01.009) 0:03:07.994 ****** 2025-09-19 00:50:24.971119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-19 00:50:24.971124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 
'listen_port': '8786'}})  2025-09-19 00:50:24.971129 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.971133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-19 00:50:24.971137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-19 00:50:24.971142 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.971146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-19 00:50:24.971150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-19 00:50:24.971154 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.971190 | orchestrator | 2025-09-19 00:50:24.971195 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-09-19 00:50:24.971199 | orchestrator | Friday 19 September 2025 00:47:12 +0000 (0:00:00.919) 0:03:08.913 ****** 2025-09-19 00:50:24.971203 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:50:24.971209 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:50:24.971213 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:50:24.971217 | orchestrator | 2025-09-19 00:50:24.971221 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-09-19 00:50:24.971224 | orchestrator | Friday 19 September 2025 00:47:13 +0000 (0:00:01.323) 0:03:10.237 ****** 2025-09-19 00:50:24.971228 | orchestrator | 
changed: [testbed-node-0] 2025-09-19 00:50:24.971232 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:50:24.971270 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:50:24.971274 | orchestrator | 2025-09-19 00:50:24.971278 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-09-19 00:50:24.971282 | orchestrator | Friday 19 September 2025 00:47:15 +0000 (0:00:02.319) 0:03:12.556 ****** 2025-09-19 00:50:24.971285 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:50:24.971289 | orchestrator | 2025-09-19 00:50:24.971293 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-09-19 00:50:24.971297 | orchestrator | Friday 19 September 2025 00:47:17 +0000 (0:00:01.308) 0:03:13.865 ****** 2025-09-19 00:50:24.971301 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-19 00:50:24.971305 | orchestrator | 2025-09-19 00:50:24.971308 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-09-19 00:50:24.971312 | orchestrator | Friday 19 September 2025 00:47:20 +0000 (0:00:02.970) 0:03:16.836 ****** 2025-09-19 00:50:24.971329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 00:50:24.971615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 00:50:24.971740 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.971775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 00:50:24.971781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 
'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 00:50:24.971785 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.971789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 00:50:24.971801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 00:50:24.971805 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.972034 | orchestrator | 2025-09-19 00:50:24.972039 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-09-19 00:50:24.972043 | orchestrator | Friday 19 September 2025 00:47:22 +0000 (0:00:02.329) 0:03:19.166 ****** 2025-09-19 00:50:24.972070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 00:50:24.972076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 00:50:24.972085 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.972119 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 00:50:24.972153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 00:50:24.972159 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.972163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 00:50:24.972171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 00:50:24.972175 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.972178 | orchestrator | 2025-09-19 00:50:24.972182 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-09-19 00:50:24.972186 | orchestrator | Friday 19 September 2025 00:47:24 +0000 (0:00:02.346) 0:03:21.512 ****** 2025-09-19 00:50:24.972193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-19 00:50:24.972214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-19 00:50:24.972219 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.972223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-19 00:50:24.972227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}})  2025-09-19 00:50:24.972231 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.972237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-19 00:50:24.972241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-19 00:50:24.972245 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.972249 | orchestrator | 2025-09-19 00:50:24.972253 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-09-19 00:50:24.972256 | orchestrator | Friday 19 September 2025 00:47:27 +0000 (0:00:02.401) 0:03:23.913 ****** 2025-09-19 00:50:24.972260 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:50:24.972264 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:50:24.972267 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:50:24.972271 | orchestrator | 2025-09-19 00:50:24.972275 | 
orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-09-19 00:50:24.972278 | orchestrator | Friday 19 September 2025 00:47:29 +0000 (0:00:02.174) 0:03:26.088 ****** 2025-09-19 00:50:24.972282 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.972286 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.972292 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.972296 | orchestrator | 2025-09-19 00:50:24.972299 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-09-19 00:50:24.972303 | orchestrator | Friday 19 September 2025 00:47:30 +0000 (0:00:01.492) 0:03:27.581 ****** 2025-09-19 00:50:24.972307 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.972311 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.972314 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.972318 | orchestrator | 2025-09-19 00:50:24.972357 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-09-19 00:50:24.972362 | orchestrator | Friday 19 September 2025 00:47:31 +0000 (0:00:00.573) 0:03:28.154 ****** 2025-09-19 00:50:24.972366 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:50:24.972370 | orchestrator | 2025-09-19 00:50:24.972374 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-09-19 00:50:24.972379 | orchestrator | Friday 19 September 2025 00:47:32 +0000 (0:00:01.102) 0:03:29.257 ****** 2025-09-19 00:50:24.972406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-19 00:50:24.972461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-19 00:50:24.972466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'active_passive': True}}}}) 2025-09-19 00:50:24.972470 | orchestrator | 2025-09-19 00:50:24.972474 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-09-19 00:50:24.972478 | orchestrator | Friday 19 September 2025 00:47:34 +0000 (0:00:01.488) 0:03:30.745 ****** 2025-09-19 00:50:24.972481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-19 00:50:24.972485 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.972492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': 
True}}}})  2025-09-19 00:50:24.972496 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.972519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-19 00:50:24.972527 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.972531 | orchestrator | 2025-09-19 00:50:24.972535 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-09-19 00:50:24.972538 | orchestrator | Friday 19 September 2025 00:47:34 +0000 (0:00:00.731) 0:03:31.477 ****** 2025-09-19 00:50:24.972543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-19 00:50:24.972547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-19 00:50:24.972551 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.972554 | orchestrator | 
skipping: [testbed-node-1] 2025-09-19 00:50:24.972558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-19 00:50:24.972831 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.972842 | orchestrator | 2025-09-19 00:50:24.972846 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-09-19 00:50:24.972850 | orchestrator | Friday 19 September 2025 00:47:35 +0000 (0:00:00.644) 0:03:32.121 ****** 2025-09-19 00:50:24.972854 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.972857 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.972861 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.972865 | orchestrator | 2025-09-19 00:50:24.972869 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-09-19 00:50:24.972872 | orchestrator | Friday 19 September 2025 00:47:35 +0000 (0:00:00.415) 0:03:32.536 ****** 2025-09-19 00:50:24.972876 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.972880 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.972884 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.972887 | orchestrator | 2025-09-19 00:50:24.972891 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-09-19 00:50:24.972895 | orchestrator | Friday 19 September 2025 00:47:37 +0000 (0:00:01.298) 0:03:33.835 ****** 2025-09-19 00:50:24.972898 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.972902 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.972906 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.972910 | orchestrator | 2025-09-19 00:50:24.972913 | 
orchestrator | TASK [include_role : neutron] ************************************************** 2025-09-19 00:50:24.972917 | orchestrator | Friday 19 September 2025 00:47:37 +0000 (0:00:00.555) 0:03:34.390 ****** 2025-09-19 00:50:24.972921 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:50:24.972925 | orchestrator | 2025-09-19 00:50:24.972928 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-09-19 00:50:24.972932 | orchestrator | Friday 19 September 2025 00:47:38 +0000 (0:00:01.197) 0:03:35.587 ****** 2025-09-19 00:50:24.972939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 00:50:24.972991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.972997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': 
'30'}}})  2025-09-19 00:50:24.973006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-19 00:50:24.973012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 00:50:24.973044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 00:50:24.973048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 00:50:24.973056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 00:50:24.973132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 00:50:24.973136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-19 00:50:24.973151 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-19 00:50:24.973175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 00:50:24.973188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 00:50:24.973246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 00:50:24.973258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 00:50:24.973262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-19 00:50:24.973269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-19 00:50:24.973277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-09-19 
00:50:24.973316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 00:50:24.973325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 00:50:24.973341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-19 00:50:24.973346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 00:50:24.973368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 00:50:24.973373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 00:50:24.973410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 00:50:24.973432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-19 00:50:24.973437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-19 00:50:24.973452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 00:50:24.973458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 00:50:24.973508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-19 00:50:24.973512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973517 | orchestrator | 2025-09-19 00:50:24.973521 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-09-19 00:50:24.973528 | orchestrator | Friday 19 September 2025 00:47:43 +0000 (0:00:04.459) 0:03:40.047 ****** 2025-09-19 00:50:24.973532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 00:50:24.973539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-19 00:50:24.973588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 00:50:24.973599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 00:50:24.973604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 
00:50:24.973631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-19 00:50:24.973642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 00:50:24.973647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 00:50:24.973668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 00:50:24.973755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-19 00:50:24.973800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-19 00:50:24.973807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973820 | orchestrator | skipping: [testbed-node-0] 
2025-09-19 00:50:24.973824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 00:50:24.973831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 00:50:24.973853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 00:50:24.973862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 00:50:24.973901 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-19 00:50:24.973914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-19 00:50:24.973919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 00:50:24.973928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 00:50:24.973934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 00:50:24.973970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.973986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 00:50:24.973991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 00:50:24.973998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-19 00:50:24.974002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.974043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-19 00:50:24.974053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.974057 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.974061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 00:50:24.974065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.974072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-09-19 00:50:24.974077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-09-19 00:50:24.974106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.974126 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.974130 | orchestrator |
2025-09-19 00:50:24.974134 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2025-09-19 00:50:24.974138 | orchestrator | Friday 19 September 2025 00:47:44 +0000 (0:00:01.581) 0:03:41.629 ******
2025-09-19 00:50:24.974143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-09-19 00:50:24.974147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-09-19 00:50:24.974151 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.974155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-09-19 00:50:24.974159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-09-19 00:50:24.974162 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.974166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-09-19 00:50:24.974170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-09-19 00:50:24.974174 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.974177 | orchestrator |
2025-09-19 00:50:24.974181 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2025-09-19 00:50:24.974185 | orchestrator | Friday 19 September 2025 00:47:46 +0000 (0:00:01.522) 0:03:43.151 ******
2025-09-19 00:50:24.974188 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:50:24.974192 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:50:24.974196 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:50:24.974200 | orchestrator |
2025-09-19 00:50:24.974203 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2025-09-19 00:50:24.974207 | orchestrator | Friday 19 September 2025 00:47:48 +0000 (0:00:01.962) 0:03:45.113 ******
2025-09-19 00:50:24.974211 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:50:24.974214 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:50:24.974218 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:50:24.974222 | orchestrator |
2025-09-19 00:50:24.974225 | orchestrator | TASK [include_role : placement] ************************************************
2025-09-19 00:50:24.974229 | orchestrator | Friday 19 September 2025 00:47:50 +0000 (0:00:02.156) 0:03:47.270 ******
2025-09-19 00:50:24.974233 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 00:50:24.974236 | orchestrator |
2025-09-19 00:50:24.974240 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2025-09-19 00:50:24.974244 | orchestrator | Friday 19 September 2025 00:47:51 +0000 (0:00:01.196) 0:03:48.467 ******
2025-09-19 00:50:24.974250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 00:50:24.974277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 00:50:24.974282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 00:50:24.974286 | orchestrator |
2025-09-19 00:50:24.974290 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2025-09-19 00:50:24.974293 | orchestrator | Friday 19 September 2025 00:47:55 +0000 (0:00:03.313) 0:03:51.781 ******
2025-09-19 00:50:24.974297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 00:50:24.974301 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.974308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 00:50:24.974315 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.974330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 00:50:24.974334 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.974338 | orchestrator |
2025-09-19 00:50:24.974342 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2025-09-19 00:50:24.974345 | orchestrator | Friday 19 September 2025 00:47:55 +0000 (0:00:00.867) 0:03:52.648 ******
2025-09-19 00:50:24.974349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-09-19 00:50:24.974353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-09-19 00:50:24.974357 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.974361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-09-19 00:50:24.974365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-09-19 00:50:24.974369 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.974372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-09-19 00:50:24.974376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-09-19 00:50:24.974380 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.974384 | orchestrator |
2025-09-19 00:50:24.974387 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2025-09-19 00:50:24.974391 | orchestrator | Friday 19 September 2025 00:47:56 +0000 (0:00:00.766) 0:03:53.415 ******
2025-09-19 00:50:24.974395 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:50:24.974401 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:50:24.974405 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:50:24.974408 | orchestrator |
2025-09-19 00:50:24.974412 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2025-09-19 00:50:24.974416 | orchestrator | Friday 19 September 2025 00:47:58 +0000 (0:00:01.319) 0:03:54.734 ******
2025-09-19 00:50:24.974419 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:50:24.974423 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:50:24.974427 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:50:24.974430 | orchestrator |
2025-09-19 00:50:24.974434 | orchestrator | TASK [include_role : nova] *****************************************************
2025-09-19 00:50:24.974438 | orchestrator | Friday 19 September 2025 00:48:00 +0000 (0:00:02.080) 0:03:56.815 ******
2025-09-19 00:50:24.974442 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 00:50:24.974445 | orchestrator |
2025-09-19 00:50:24.974451 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2025-09-19 00:50:24.974455 | orchestrator | Friday 19 September 2025 00:48:01 +0000 (0:00:01.486) 0:03:58.302 ******
2025-09-19 00:50:24.974471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 00:50:24.974476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.974480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.974484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 00:50:24.974493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.974509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 00:50:24.974513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.974517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.974524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.974528 | orchestrator |
2025-09-19 00:50:24.974531 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2025-09-19 00:50:24.974535 | orchestrator | Friday 19 September 2025 00:48:06 +0000 (0:00:04.355) 0:04:02.658 ******
2025-09-19 00:50:24.974541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 00:50:24.974556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.974561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.974565 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.974569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 00:50:24.974576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.974582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.974586 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.974602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 00:50:24.974607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.974612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 00:50:24.974619 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.974623 | orchestrator |
2025-09-19 00:50:24.974627 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2025-09-19 00:50:24.974632 | orchestrator | Friday 19 September 2025 00:48:06 +0000 (0:00:00.636) 0:04:03.294 ******
2025-09-19 00:50:24.974636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-19 00:50:24.974641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-19 00:50:24.974645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-19 00:50:24.974650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-19 00:50:24.974654 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.974661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-19 00:50:24.974665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-19 00:50:24.974670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-19 00:50:24.974674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-19 00:50:24.974679 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.974694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-19 00:50:24.974699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-19 00:50:24.974704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-19 00:50:24.974708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-19 00:50:24.974712 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.974716 | orchestrator |
2025-09-19 00:50:24.974723 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2025-09-19 00:50:24.974727 | orchestrator | Friday 19 September 2025 00:48:07 +0000 (0:00:01.333) 0:04:04.627 ******
2025-09-19 00:50:24.974732 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:50:24.974736 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:50:24.974740 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:50:24.974744 | orchestrator |
2025-09-19 00:50:24.974748 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2025-09-19 00:50:24.974752 | orchestrator | Friday 19 September 2025 00:48:09 +0000 (0:00:01.420) 0:04:06.048 ******
2025-09-19 00:50:24.974757 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:50:24.974761 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:50:24.974765 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:50:24.974769 | orchestrator |
2025-09-19 00:50:24.974773 | orchestrator | TASK [include_role : nova-cell] ************************************************
2025-09-19 00:50:24.974777 | orchestrator | Friday 19 September 2025 00:48:11 +0000 (0:00:02.048) 0:04:08.096 ******
2025-09-19 00:50:24.974782 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 00:50:24.974786 | orchestrator |
2025-09-19 00:50:24.974790 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2025-09-19 00:50:24.974794 | orchestrator | Friday 19 September 2025 00:48:12 +0000 (0:00:01.513) 0:04:09.610 ******
2025-09-19 00:50:24.974799 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2025-09-19 00:50:24.974803 | orchestrator |
2025-09-19 00:50:24.974807 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2025-09-19 00:50:24.974811 | orchestrator | Friday 19 September 2025 00:48:13 +0000 (0:00:00.808) 0:04:10.419 ******
2025-09-19 00:50:24.974816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-19 00:50:24.974823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-19 00:50:24.974828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-19 00:50:24.974832 | orchestrator |
2025-09-19 00:50:24.974836 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2025-09-19 00:50:24.974841 | orchestrator | Friday 19 September 2025 00:48:18 +0000 (0:00:04.336) 0:04:14.756 ******
2025-09-19 00:50:24.974856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-19 00:50:24.974865 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.974870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-19 00:50:24.974874 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.974878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-19 00:50:24.974883 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.974887 | orchestrator |
2025-09-19 00:50:24.974891 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2025-09-19 00:50:24.974896 | orchestrator | Friday 19 September 2025 00:48:19 +0000 (0:00:01.425) 0:04:16.181 ******
2025-09-19 00:50:24.974900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-09-19 00:50:24.974905 | orchestrator | skipping: [testbed-node-0]
=> (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 00:50:24.974909 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.974914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 00:50:24.974918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 00:50:24.974923 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.974927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 00:50:24.974931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 00:50:24.974938 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.974942 | orchestrator | 2025-09-19 00:50:24.974946 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-19 00:50:24.974951 | orchestrator | Friday 19 September 2025 00:48:21 +0000 (0:00:01.582) 0:04:17.764 ****** 2025-09-19 00:50:24.974955 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:50:24.974960 | orchestrator | changed: [testbed-node-1] 
2025-09-19 00:50:24.974964 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:50:24.974968 | orchestrator | 2025-09-19 00:50:24.974975 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-19 00:50:24.974979 | orchestrator | Friday 19 September 2025 00:48:23 +0000 (0:00:02.598) 0:04:20.363 ****** 2025-09-19 00:50:24.974983 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:50:24.974986 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:50:24.974990 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:50:24.974994 | orchestrator | 2025-09-19 00:50:24.974998 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-09-19 00:50:24.975001 | orchestrator | Friday 19 September 2025 00:48:26 +0000 (0:00:03.012) 0:04:23.376 ****** 2025-09-19 00:50:24.975005 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-09-19 00:50:24.975009 | orchestrator | 2025-09-19 00:50:24.975013 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-09-19 00:50:24.975027 | orchestrator | Friday 19 September 2025 00:48:28 +0000 (0:00:01.360) 0:04:24.736 ****** 2025-09-19 00:50:24.975032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 00:50:24.975036 | orchestrator | skipping: [testbed-node-0] 2025-09-19 
00:50:24.975040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 00:50:24.975044 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.975048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 00:50:24.975051 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.975055 | orchestrator | 2025-09-19 00:50:24.975059 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-09-19 00:50:24.975063 | orchestrator | Friday 19 September 2025 00:48:29 +0000 (0:00:01.313) 0:04:26.049 ****** 2025-09-19 00:50:24.975067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 00:50:24.975071 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.975075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 00:50:24.975083 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.975087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 00:50:24.975091 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.975130 | orchestrator | 2025-09-19 00:50:24.975134 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-09-19 00:50:24.975138 | orchestrator | Friday 19 September 2025 00:48:30 +0000 (0:00:01.353) 0:04:27.403 ****** 2025-09-19 00:50:24.975142 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.975145 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.975149 | orchestrator | skipping: [testbed-node-2] 
2025-09-19 00:50:24.975153 | orchestrator | 2025-09-19 00:50:24.975157 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-19 00:50:24.975173 | orchestrator | Friday 19 September 2025 00:48:32 +0000 (0:00:01.818) 0:04:29.221 ****** 2025-09-19 00:50:24.975177 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:50:24.975181 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:50:24.975185 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:50:24.975188 | orchestrator | 2025-09-19 00:50:24.975192 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-19 00:50:24.975196 | orchestrator | Friday 19 September 2025 00:48:34 +0000 (0:00:02.419) 0:04:31.640 ****** 2025-09-19 00:50:24.975199 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:50:24.975203 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:50:24.975207 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:50:24.975210 | orchestrator | 2025-09-19 00:50:24.975214 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-09-19 00:50:24.975218 | orchestrator | Friday 19 September 2025 00:48:38 +0000 (0:00:03.013) 0:04:34.654 ****** 2025-09-19 00:50:24.975222 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-09-19 00:50:24.975225 | orchestrator | 2025-09-19 00:50:24.975229 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-09-19 00:50:24.975233 | orchestrator | Friday 19 September 2025 00:48:38 +0000 (0:00:00.861) 0:04:35.516 ****** 2025-09-19 00:50:24.975237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 00:50:24.975241 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.975245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 00:50:24.975254 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.975258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 00:50:24.975262 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.975265 | orchestrator | 2025-09-19 00:50:24.975269 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-09-19 00:50:24.975273 | orchestrator | Friday 19 September 2025 00:48:40 +0000 (0:00:01.338) 0:04:36.854 ****** 2025-09-19 00:50:24.975279 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 00:50:24.975283 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.975287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 00:50:24.975291 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.975306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 00:50:24.975310 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.975314 | orchestrator | 2025-09-19 00:50:24.975318 | orchestrator | TASK [haproxy-config : 
Configuring firewall for nova-cell:nova-serialproxy] **** 2025-09-19 00:50:24.975321 | orchestrator | Friday 19 September 2025 00:48:41 +0000 (0:00:01.345) 0:04:38.200 ****** 2025-09-19 00:50:24.975325 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.975329 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.975332 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.975336 | orchestrator | 2025-09-19 00:50:24.975340 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-19 00:50:24.975343 | orchestrator | Friday 19 September 2025 00:48:42 +0000 (0:00:01.418) 0:04:39.619 ****** 2025-09-19 00:50:24.975347 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:50:24.975351 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:50:24.975354 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:50:24.975358 | orchestrator | 2025-09-19 00:50:24.975362 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-19 00:50:24.975369 | orchestrator | Friday 19 September 2025 00:48:46 +0000 (0:00:03.330) 0:04:42.949 ****** 2025-09-19 00:50:24.975373 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:50:24.975377 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:50:24.975380 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:50:24.975384 | orchestrator | 2025-09-19 00:50:24.975388 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-09-19 00:50:24.975392 | orchestrator | Friday 19 September 2025 00:48:49 +0000 (0:00:03.030) 0:04:45.980 ****** 2025-09-19 00:50:24.975395 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:50:24.975399 | orchestrator | 2025-09-19 00:50:24.975403 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-09-19 00:50:24.975406 | orchestrator | Friday 19 September 2025 00:48:51 
+0000 (0:00:01.686) 0:04:47.667 ****** 2025-09-19 00:50:24.975410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 00:50:24.975415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 00:50:24.975421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 00:50:24.975435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 00:50:24.975440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 00:50:24.975447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.975451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 00:50:24.975455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 00:50:24.975462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 00:50:24.975476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.975480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 00:50:24.975487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 00:50:24.975491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 00:50:24.975495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 00:50:24.975501 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.975504 | orchestrator | 2025-09-19 00:50:24.975508 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-09-19 00:50:24.975512 | orchestrator | Friday 19 September 2025 00:48:54 +0000 (0:00:03.941) 0:04:51.608 ****** 2025-09-19 00:50:24.975527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 00:50:24.975535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 00:50:24.975539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 00:50:24.975542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 00:50:24.975546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.975550 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.975556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 00:50:24.975571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 00:50:24.975579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 00:50:24.975583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 00:50:24.975587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.975591 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.975597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 00:50:24.975601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 00:50:24.975616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 00:50:24.975623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 00:50:24.975627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 00:50:24.975631 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.975635 | orchestrator | 2025-09-19 00:50:24.975639 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-09-19 00:50:24.975643 | orchestrator | Friday 19 September 2025 00:48:56 +0000 (0:00:01.067) 
0:04:52.676 ****** 2025-09-19 00:50:24.975646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-19 00:50:24.975650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-19 00:50:24.975654 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.975658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-19 00:50:24.975662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-19 00:50:24.975665 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.975669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-19 00:50:24.975673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-19 00:50:24.975677 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.975681 | orchestrator | 2025-09-19 00:50:24.975687 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-09-19 00:50:24.975691 | orchestrator | Friday 19 September 2025 
00:48:57 +0000 (0:00:01.249) 0:04:53.925 ****** 2025-09-19 00:50:24.975694 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:50:24.975698 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:50:24.975704 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:50:24.975708 | orchestrator | 2025-09-19 00:50:24.975712 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-09-19 00:50:24.975715 | orchestrator | Friday 19 September 2025 00:48:58 +0000 (0:00:01.347) 0:04:55.272 ****** 2025-09-19 00:50:24.975719 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:50:24.975723 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:50:24.975726 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:50:24.975730 | orchestrator | 2025-09-19 00:50:24.975734 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-09-19 00:50:24.975738 | orchestrator | Friday 19 September 2025 00:49:00 +0000 (0:00:02.122) 0:04:57.395 ****** 2025-09-19 00:50:24.975741 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:50:24.975745 | orchestrator | 2025-09-19 00:50:24.975749 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-09-19 00:50:24.975752 | orchestrator | Friday 19 September 2025 00:49:02 +0000 (0:00:01.608) 0:04:59.004 ****** 2025-09-19 00:50:24.975767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 00:50:24.975772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 00:50:24.975776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 00:50:24.975782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 00:50:24.975800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 00:50:24.975806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 00:50:24.975810 | orchestrator | 2025-09-19 00:50:24.975813 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-09-19 00:50:24.975817 | orchestrator | Friday 19 September 2025 00:49:07 +0000 (0:00:05.154) 0:05:04.159 ****** 2025-09-19 00:50:24.975821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 00:50:24.975831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 00:50:24.975836 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.975850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 00:50:24.975854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 00:50:24.975859 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.975863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 00:50:24.975872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 00:50:24.975876 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.975880 | orchestrator | 2025-09-19 
00:50:24.975884 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-09-19 00:50:24.975887 | orchestrator | Friday 19 September 2025 00:49:08 +0000 (0:00:00.640) 0:05:04.800 ****** 2025-09-19 00:50:24.975891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-19 00:50:24.975905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-19 00:50:24.975910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-19 00:50:24.975914 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.975918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-19 00:50:24.975921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-19 00:50:24.975925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-19 00:50:24.975929 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.975933 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-19 00:50:24.975936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-19 00:50:24.975940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-19 00:50:24.975944 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.975948 | orchestrator | 2025-09-19 00:50:24.975952 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-09-19 00:50:24.975959 | orchestrator | Friday 19 September 2025 00:49:09 +0000 (0:00:01.661) 0:05:06.461 ****** 2025-09-19 00:50:24.975963 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.975966 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.975970 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.975974 | orchestrator | 2025-09-19 00:50:24.975977 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-09-19 00:50:24.975981 | orchestrator | Friday 19 September 2025 00:49:10 +0000 (0:00:00.465) 0:05:06.927 ****** 2025-09-19 00:50:24.975985 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.975989 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.975992 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.975996 | orchestrator | 2025-09-19 00:50:24.975999 | orchestrator | TASK [include_role : prometheus] 
*********************************************** 2025-09-19 00:50:24.976003 | orchestrator | Friday 19 September 2025 00:49:11 +0000 (0:00:01.352) 0:05:08.279 ****** 2025-09-19 00:50:24.976007 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:50:24.976011 | orchestrator | 2025-09-19 00:50:24.976014 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-09-19 00:50:24.976018 | orchestrator | Friday 19 September 2025 00:49:13 +0000 (0:00:01.670) 0:05:09.950 ****** 2025-09-19 00:50:24.976024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-19 00:50:24.976028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 00:50:24.976043 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:50:24.976048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 00:50:24.976055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:50:24.976059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 00:50:24.976063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 00:50:24.976069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:50:24.976073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:50:24.976088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 00:50:24.976105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 00:50:24.976113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 00:50:24.976117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:50:24.976121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:50:24.976125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 00:50:24.976131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 00:50:24.976135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-19 00:50:24.976142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:50:24.976158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:50:24.976164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 00:50:24.976168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 00:50:24.976175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-19 00:50:24.976180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:50:24.976186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:50:24.976190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 00:50:24.976194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 00:50:24.976201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-19 00:50:24.976207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:50:24.976211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:50:24.976218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 00:50:24.976222 | orchestrator |
2025-09-19 00:50:24.976225 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2025-09-19 00:50:24.976229 | orchestrator | Friday 19 September 2025 00:49:17 +0000 (0:00:04.175) 0:05:14.125 ******
2025-09-19 00:50:24.976233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 00:50:24.976237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 00:50:24.976243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:50:24.976247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:50:24.976253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 00:50:24.976263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 00:50:24.976267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-19 00:50:24.976271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:50:24.976276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:50:24.976280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 00:50:24.976284 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.976290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 00:50:24.976297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 00:50:24.976301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:50:24.976305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:50:24.976309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 00:50:24.976315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 00:50:24.976321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 00:50:24.976328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-19 00:50:24.976332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 00:50:24.976336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:50:24.976340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:50:24.976345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:50:24.976350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:50:24.976358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 00:50:24.976362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 00:50:24.976366 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.976370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 00:50:24.976374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-19 00:50:24.976381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:50:24.976385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 00:50:24.976398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 00:50:24.976402 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.976406 | orchestrator |
2025-09-19 00:50:24.976410 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2025-09-19 00:50:24.976414 | orchestrator | Friday 19 September 2025 00:49:18 +0000 (0:00:00.840) 0:05:14.965 ******
2025-09-19 00:50:24.976417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-09-19 00:50:24.976421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-09-19 00:50:24.976426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-19 00:50:24.976432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-19 00:50:24.976436 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.976440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-09-19 00:50:24.976444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-09-19 00:50:24.976448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-19 00:50:24.976452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-19 00:50:24.976455 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.976459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-19 00:50:24.976463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-19 00:50:24.976469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-19 00:50:24.976475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-19 00:50:24.976479 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.976483 | orchestrator | 2025-09-19 00:50:24.976487 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-09-19 00:50:24.976491 | orchestrator | Friday 19 September 2025 00:49:19 +0000 (0:00:01.277) 0:05:16.243 ****** 2025-09-19 00:50:24.976494 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.976498 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.976502 | orchestrator | skipping: [testbed-node-2] 2025-09-19 
00:50:24.976505 | orchestrator | 2025-09-19 00:50:24.976509 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-09-19 00:50:24.976513 | orchestrator | Friday 19 September 2025 00:49:20 +0000 (0:00:00.512) 0:05:16.755 ****** 2025-09-19 00:50:24.976518 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.976522 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.976526 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.976529 | orchestrator | 2025-09-19 00:50:24.976533 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-09-19 00:50:24.976537 | orchestrator | Friday 19 September 2025 00:49:21 +0000 (0:00:01.319) 0:05:18.075 ****** 2025-09-19 00:50:24.976541 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:50:24.976544 | orchestrator | 2025-09-19 00:50:24.976548 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-09-19 00:50:24.976552 | orchestrator | Friday 19 September 2025 00:49:22 +0000 (0:00:01.398) 0:05:19.474 ****** 2025-09-19 00:50:24.976556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-19 00:50:24.976560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-19 00:50:24.976568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-19 00:50:24.976572 | orchestrator | 2025-09-19 00:50:24.976576 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-09-19 00:50:24.976580 | orchestrator | Friday 19 September 2025 00:49:25 +0000 (0:00:02.693) 0:05:22.168 ****** 2025-09-19 00:50:24.976586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-19 00:50:24.976590 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.976594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-19 00:50:24.976598 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.976602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-19 00:50:24.976608 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.976612 | orchestrator | 2025-09-19 00:50:24.976616 | orchestrator | TASK [haproxy-config : Configuring firewall for 
rabbitmq] ********************** 2025-09-19 00:50:24.976619 | orchestrator | Friday 19 September 2025 00:49:25 +0000 (0:00:00.420) 0:05:22.588 ****** 2025-09-19 00:50:24.976623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-19 00:50:24.976627 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.976633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-19 00:50:24.976636 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.976640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-19 00:50:24.976644 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.976648 | orchestrator | 2025-09-19 00:50:24.976651 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-09-19 00:50:24.976655 | orchestrator | Friday 19 September 2025 00:49:26 +0000 (0:00:00.640) 0:05:23.228 ****** 2025-09-19 00:50:24.976659 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.976662 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.976666 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.976670 | orchestrator | 2025-09-19 00:50:24.976673 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-09-19 00:50:24.976677 | orchestrator | Friday 19 September 2025 00:49:27 +0000 (0:00:00.875) 0:05:24.104 ****** 2025-09-19 00:50:24.976681 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.976685 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.976688 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.976692 | 
orchestrator | 2025-09-19 00:50:24.976696 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-09-19 00:50:24.976701 | orchestrator | Friday 19 September 2025 00:49:28 +0000 (0:00:01.352) 0:05:25.456 ****** 2025-09-19 00:50:24.976705 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:50:24.976709 | orchestrator | 2025-09-19 00:50:24.976712 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-09-19 00:50:24.976716 | orchestrator | Friday 19 September 2025 00:49:30 +0000 (0:00:01.524) 0:05:26.980 ****** 2025-09-19 00:50:24.976720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-19 00:50:24.976725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-19 00:50:24.976733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-19 00:50:24.976738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-19 00:50:24.976744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-19 00:50:24.976748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-19 00:50:24.976757 | orchestrator | 2025-09-19 00:50:24.976761 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-09-19 00:50:24.976764 | orchestrator | Friday 19 September 2025 00:49:36 +0000 (0:00:06.423) 0:05:33.404 ****** 2025-09-19 00:50:24.976768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-19 00:50:24.976774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-19 00:50:24.976778 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:50:24.976784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-19 00:50:24.976788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 
'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-19 00:50:24.976794 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:50:24.976798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-19 00:50:24.976804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 
'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-19 00:50:24.976808 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:50:24.976812 | orchestrator | 2025-09-19 00:50:24.976816 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-09-19 00:50:24.976819 | orchestrator | Friday 19 September 2025 00:49:37 +0000 (0:00:00.642) 0:05:34.046 ****** 2025-09-19 00:50:24.976823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-19 00:50:24.976829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-19 00:50:24.976833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-19 00:50:24.976837 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-19 00:50:24.976843 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.976847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-19 00:50:24.976851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-19 00:50:24.976854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-19 00:50:24.976858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-19 00:50:24.976862 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.976866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-19 00:50:24.976870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-19 00:50:24.976873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-19 00:50:24.976877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-19 00:50:24.976881 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.976885 | orchestrator |
2025-09-19 00:50:24.976888 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-09-19 00:50:24.976892 | orchestrator | Friday 19 September 2025 00:49:38 +0000 (0:00:00.948) 0:05:34.994 ******
2025-09-19 00:50:24.976896 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:50:24.976899 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:50:24.976903 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:50:24.976907 | orchestrator |
2025-09-19 00:50:24.976910 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-09-19 00:50:24.976914 | orchestrator | Friday 19 September 2025 00:49:40 +0000 (0:00:02.091) 0:05:37.085 ******
2025-09-19 00:50:24.976918 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:50:24.976922 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:50:24.976925 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:50:24.976929 | orchestrator |
2025-09-19 00:50:24.976936 | orchestrator | TASK [include_role : swift] ****************************************************
2025-09-19 00:50:24.976939 | orchestrator | Friday 19 September 2025 00:49:42 +0000 (0:00:02.224) 0:05:39.310 ******
2025-09-19 00:50:24.976943 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.976947 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.976950 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.976954 | orchestrator |
2025-09-19 00:50:24.976958 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-09-19 00:50:24.976961 | orchestrator | Friday 19 September 2025 00:49:42 +0000 (0:00:00.328) 0:05:39.638 ******
2025-09-19 00:50:24.976965 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.976969 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.976972 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.976979 | orchestrator |
2025-09-19 00:50:24.976982 | orchestrator | TASK [include_role : trove] ****************************************************
2025-09-19 00:50:24.976986 | orchestrator | Friday 19 September 2025 00:49:43 +0000 (0:00:00.305) 0:05:39.944 ******
2025-09-19 00:50:24.976990 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.976993 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.976997 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.977001 | orchestrator |
2025-09-19 00:50:24.977005 | orchestrator | TASK [include_role : venus] ****************************************************
2025-09-19 00:50:24.977008 | orchestrator | Friday 19 September 2025 00:49:43 +0000 (0:00:00.296) 0:05:40.240 ******
2025-09-19 00:50:24.977014 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.977018 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.977022 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.977026 | orchestrator |
2025-09-19 00:50:24.977029 | orchestrator | TASK [include_role : watcher] **************************************************
2025-09-19 00:50:24.977033 | orchestrator | Friday 19 September 2025 00:49:44 +0000 (0:00:00.617) 0:05:40.858 ******
2025-09-19 00:50:24.977037 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.977040 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.977044 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.977048 | orchestrator |
2025-09-19 00:50:24.977051 | orchestrator | TASK [include_role : zun] ******************************************************
2025-09-19 00:50:24.977055 | orchestrator | Friday 19 September 2025 00:49:44 +0000 (0:00:00.316) 0:05:41.175 ******
2025-09-19 00:50:24.977059 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.977062 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.977066 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.977070 | orchestrator |
2025-09-19 00:50:24.977073 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-09-19 00:50:24.977077 | orchestrator | Friday 19 September 2025 00:49:45 +0000 (0:00:00.564) 0:05:41.739 ******
2025-09-19 00:50:24.977081 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:50:24.977085 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:50:24.977088 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:50:24.977139 | orchestrator |
2025-09-19 00:50:24.977143 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-09-19 00:50:24.977147 | orchestrator | Friday 19 September 2025 00:49:46 +0000 (0:00:01.004) 0:05:42.744 ******
2025-09-19 00:50:24.977151 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:50:24.977154 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:50:24.977158 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:50:24.977162 | orchestrator |
2025-09-19 00:50:24.977165 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-09-19 00:50:24.977169 | orchestrator | Friday 19 September 2025 00:49:46 +0000 (0:00:00.378) 0:05:43.122 ******
2025-09-19 00:50:24.977173 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:50:24.977176 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:50:24.977180 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:50:24.977184 | orchestrator |
2025-09-19 00:50:24.977187 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-09-19 00:50:24.977191 | orchestrator | Friday 19 September 2025 00:49:47 +0000 (0:00:00.926) 0:05:44.049 ******
2025-09-19 00:50:24.977195 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:50:24.977198 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:50:24.977202 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:50:24.977206 | orchestrator |
2025-09-19 00:50:24.977209 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-09-19 00:50:24.977213 | orchestrator | Friday 19 September 2025 00:49:48 +0000 (0:00:00.909) 0:05:44.958 ******
2025-09-19 00:50:24.977217 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:50:24.977220 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:50:24.977224 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:50:24.977228 | orchestrator |
2025-09-19 00:50:24.977231 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-09-19 00:50:24.977238 | orchestrator | Friday 19 September 2025 00:49:49 +0000 (0:00:01.287) 0:05:46.245 ******
2025-09-19 00:50:24.977242 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:50:24.977246 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:50:24.977249 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:50:24.977253 | orchestrator |
2025-09-19 00:50:24.977257 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-09-19 00:50:24.977260 | orchestrator | Friday 19 September 2025 00:49:54 +0000 (0:00:02.764) 0:05:50.977 ******
2025-09-19 00:50:24.977264 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:50:24.977268 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:50:24.977271 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:50:24.977275 | orchestrator |
2025-09-19 00:50:24.977279 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-09-19 00:50:24.977282 | orchestrator | Friday 19 September 2025 00:49:57 +0000 (0:00:02.764) 0:05:53.741 ******
2025-09-19 00:50:24.977286 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:50:24.977290 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:50:24.977294 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:50:24.977297 | orchestrator |
2025-09-19 00:50:24.977301 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-09-19 00:50:24.977305 | orchestrator | Friday 19 September 2025 00:50:09 +0000 (0:00:12.400) 0:06:06.142 ******
2025-09-19 00:50:24.977308 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:50:24.977312 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:50:24.977316 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:50:24.977319 | orchestrator |
2025-09-19 00:50:24.977325 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-09-19 00:50:24.977329 | orchestrator | Friday 19 September 2025 00:50:10 +0000 (0:00:00.740) 0:06:06.882 ******
2025-09-19 00:50:24.977333 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:50:24.977337 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:50:24.977340 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:50:24.977344 | orchestrator |
2025-09-19 00:50:24.977348 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-09-19 00:50:24.977351 | orchestrator | Friday 19 September 2025 00:50:18 +0000 (0:00:08.607) 0:06:15.489 ******
2025-09-19 00:50:24.977355 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.977359 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.977362 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.977366 | orchestrator |
2025-09-19 00:50:24.977370 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-09-19 00:50:24.977374 | orchestrator | Friday 19 September 2025 00:50:19 +0000 (0:00:00.374) 0:06:15.864 ******
2025-09-19 00:50:24.977377 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.977381 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.977385 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.977388 | orchestrator |
2025-09-19 00:50:24.977392 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-09-19 00:50:24.977396 | orchestrator | Friday 19 September 2025 00:50:19 +0000 (0:00:00.337) 0:06:16.202 ******
2025-09-19 00:50:24.977400 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.977403 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.977409 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.977413 | orchestrator |
2025-09-19 00:50:24.977416 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-09-19 00:50:24.977420 | orchestrator | Friday 19 September 2025 00:50:19 +0000 (0:00:00.348) 0:06:16.550 ******
2025-09-19 00:50:24.977424 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.977428 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.977431 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.977435 | orchestrator |
2025-09-19 00:50:24.977439 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-09-19 00:50:24.977442 | orchestrator | Friday 19 September 2025 00:50:20 +0000 (0:00:00.721) 0:06:17.272 ******
2025-09-19 00:50:24.977449 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.977452 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.977456 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.977460 | orchestrator |
2025-09-19 00:50:24.977463 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-09-19 00:50:24.977467 | orchestrator | Friday 19 September 2025 00:50:20 +0000 (0:00:00.372) 0:06:17.644 ******
2025-09-19 00:50:24.977471 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:50:24.977475 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:50:24.977478 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:50:24.977482 | orchestrator |
2025-09-19 00:50:24.977486 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-09-19 00:50:24.977489 | orchestrator | Friday 19 September 2025 00:50:21 +0000 (0:00:00.398) 0:06:18.043 ******
2025-09-19 00:50:24.977493 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:50:24.977497 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:50:24.977500 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:50:24.977504 | orchestrator |
2025-09-19 00:50:24.977508 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-09-19 00:50:24.977512 | orchestrator | Friday 19 September 2025 00:50:22 +0000 (0:00:01.336) 0:06:19.379 ******
2025-09-19 00:50:24.977515 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:50:24.977519 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:50:24.977523 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:50:24.977526 | orchestrator |
2025-09-19 00:50:24.977530 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 00:50:24.977534 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-19 00:50:24.977538 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-19 00:50:24.977542 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-19 00:50:24.977546 | orchestrator |
2025-09-19 00:50:24.977549 | orchestrator |
2025-09-19 00:50:24.977553 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 00:50:24.977557 | orchestrator | Friday 19 September 2025 00:50:24 +0000 (0:00:01.334) 0:06:20.714 ******
2025-09-19 00:50:24.977561 | orchestrator | ===============================================================================
2025-09-19 00:50:24.977564 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 12.40s
2025-09-19 00:50:24.977568 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.61s
2025-09-19 00:50:24.977572 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 6.93s
2025-09-19 00:50:24.977575 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.42s
2025-09-19 00:50:24.977579 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.16s
2025-09-19 00:50:24.977583 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.97s
2025-09-19 00:50:24.977586 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.87s
2025-09-19 00:50:24.977590 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.83s
2025-09-19 00:50:24.977594 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.73s
2025-09-19 00:50:24.977597 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.60s
2025-09-19 00:50:24.977601 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.46s
2025-09-19 00:50:24.977607 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.36s
2025-09-19 00:50:24.977611 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.34s
2025-09-19 00:50:24.977614 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.18s
2025-09-19 00:50:24.977621 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.09s
2025-09-19 00:50:24.977625 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.00s
2025-09-19 00:50:24.977629 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.94s
2025-09-19 00:50:24.977632 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.81s
2025-09-19 00:50:24.977636 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 3.56s
2025-09-19 00:50:24.977640 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.51s
2025-09-19 00:50:24.977643 | orchestrator | 2025-09-19 00:50:24 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:50:24.977647 | orchestrator | 2025-09-19 00:50:24 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:50:27.997767 | orchestrator | 2025-09-19 00:50:27 | INFO  | Task c798c2f1-f5f9-4e05-a9d4-0c34483ed745 is in state STARTED
2025-09-19 00:50:27.998645 | orchestrator | 2025-09-19 00:50:27 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:50:27.999647 | orchestrator | 2025-09-19 00:50:27 | INFO  | Task 5f63fc68-4754-4e84-9f2b-2101a09bc8f9 is in state STARTED
2025-09-19 00:50:27.999673 | orchestrator | 2025-09-19 00:50:27 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:50:31.055645 | orchestrator | 2025-09-19 00:50:31 | INFO  | Task c798c2f1-f5f9-4e05-a9d4-0c34483ed745 is in state STARTED
2025-09-19 00:50:31.057745 | orchestrator | 2025-09-19 00:50:31 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state STARTED
2025-09-19 00:50:31.059159 | orchestrator | 2025-09-19 00:50:31 | INFO  |
Task 5f63fc68-4754-4e84-9f2b-2101a09bc8f9 is in state STARTED
2025-09-19 00:50:31.059388 | orchestrator | 2025-09-19 00:50:31 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:52:39.099382 | orchestrator | 2025-09-19 00:52:39 | INFO  | Task c798c2f1-f5f9-4e05-a9d4-0c34483ed745 is in state STARTED
2025-09-19 00:52:39.106966 | orchestrator | 2025-09-19 00:52:39 | INFO  | Task 707624ed-d7b8-4737-bba4-a9ff49d02733 is in state SUCCESS
2025-09-19 00:52:39.108941 | orchestrator |
2025-09-19 00:52:39.108992 | orchestrator |
2025-09-19 00:52:39.109012 | orchestrator | PLAY [Prepare deployment of
Ceph services] *************************************
2025-09-19 00:52:39.109032 | orchestrator |
2025-09-19 00:52:39.109051 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-09-19 00:52:39.109208 | orchestrator | Friday 19 September 2025 00:41:43 +0000 (0:00:00.782) 0:00:00.782 ******
2025-09-19 00:52:39.109223 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 00:52:39.109236 | orchestrator |
2025-09-19 00:52:39.109247 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-09-19 00:52:39.109258 | orchestrator | Friday 19 September 2025 00:41:44 +0000 (0:00:01.364) 0:00:02.146 ******
2025-09-19 00:52:39.109269 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:52:39.109281 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.109292 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.109302 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:52:39.109313 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:52:39.109323 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.109334 | orchestrator |
2025-09-19 00:52:39.109345 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-09-19 00:52:39.109391 | orchestrator | Friday 19 September 2025 00:41:45 +0000 (0:00:01.452) 0:00:03.599 ******
2025-09-19 00:52:39.109403 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.109414 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.109425 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.109435 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:52:39.109446 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:52:39.109456 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:52:39.109467 | orchestrator |
2025-09-19 00:52:39.109477 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-09-19 00:52:39.109491 | orchestrator | Friday 19 September 2025 00:41:46 +0000 (0:00:00.678) 0:00:04.277 ******
2025-09-19 00:52:39.109504 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.109516 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.109528 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.109540 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:52:39.109553 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:52:39.109564 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:52:39.109575 | orchestrator |
2025-09-19 00:52:39.109585 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-09-19 00:52:39.109596 | orchestrator | Friday 19 September 2025 00:41:47 +0000 (0:00:01.122) 0:00:05.399 ******
2025-09-19 00:52:39.109607 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.109643 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.109654 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.109664 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:52:39.109675 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:52:39.109709 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:52:39.109722 | orchestrator |
2025-09-19 00:52:39.109733 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-09-19 00:52:39.109744 | orchestrator | Friday 19 September 2025 00:41:48 +0000 (0:00:00.748) 0:00:06.148 ******
2025-09-19 00:52:39.109755 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.109765 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.109776 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.109786 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:52:39.109797 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:52:39.109808 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:52:39.109818 | orchestrator |
2025-09-19 00:52:39.109829 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-09-19 00:52:39.109839 | orchestrator | Friday 19 September 2025 00:41:49 +0000 (0:00:00.625) 0:00:06.773 ******
2025-09-19 00:52:39.109850 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.109860 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.109891 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.109902 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:52:39.109913 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:52:39.109923 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:52:39.109934 | orchestrator |
2025-09-19 00:52:39.109945 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-09-19 00:52:39.109956 | orchestrator | Friday 19 September 2025 00:41:50 +0000 (0:00:00.920) 0:00:07.693 ******
2025-09-19 00:52:39.109967 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.109978 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.109989 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.109999 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.110010 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:52:39.110072 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:52:39.110085 | orchestrator |
2025-09-19 00:52:39.110095 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-09-19 00:52:39.110106 | orchestrator | Friday 19 September 2025 00:41:51 +0000 (0:00:01.137) 0:00:08.831 ******
2025-09-19 00:52:39.110117 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.110128 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.110138 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.110149 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:52:39.110159 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:52:39.110170 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:52:39.110181 | orchestrator |
2025-09-19 00:52:39.110191 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-09-19 00:52:39.110202 | orchestrator | Friday 19 September 2025 00:41:51 +0000 (0:00:00.785) 0:00:09.616 ******
2025-09-19 00:52:39.110213 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 00:52:39.110224 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-19 00:52:39.110235 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-19 00:52:39.110246 | orchestrator |
2025-09-19 00:52:39.110257 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-09-19 00:52:39.110281 | orchestrator | Friday 19 September 2025 00:41:52 +0000 (0:00:00.792) 0:00:10.409 ******
2025-09-19 00:52:39.110292 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.110303 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.110314 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.110325 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:52:39.110474 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:52:39.110486 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:52:39.110497 | orchestrator |
2025-09-19 00:52:39.110521 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-09-19 00:52:39.110543 | orchestrator | Friday 19 September 2025 00:41:53 +0000 (0:00:01.079) 0:00:11.489 ******
2025-09-19 00:52:39.110554 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 00:52:39.110565 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-19 00:52:39.110576 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-19 00:52:39.110586 | orchestrator |
2025-09-19 00:52:39.110597 |
orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-19 00:52:39.110608 | orchestrator | Friday 19 September 2025 00:41:56 +0000 (0:00:03.141) 0:00:14.630 ****** 2025-09-19 00:52:39.110619 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-19 00:52:39.110630 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-19 00:52:39.110641 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-19 00:52:39.110652 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.110662 | orchestrator | 2025-09-19 00:52:39.110673 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-19 00:52:39.110684 | orchestrator | Friday 19 September 2025 00:41:57 +0000 (0:00:00.564) 0:00:15.195 ****** 2025-09-19 00:52:39.110696 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.110711 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.110722 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.110733 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.110744 | orchestrator | 2025-09-19 00:52:39.110755 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-19 00:52:39.110765 | orchestrator | Friday 19 September 
2025 00:41:58 +0000 (0:00:01.016) 0:00:16.212 ****** 2025-09-19 00:52:39.110778 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.110792 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.110803 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.110814 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.110825 | orchestrator | 2025-09-19 00:52:39.110836 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-19 00:52:39.110847 | orchestrator | Friday 19 September 2025 00:41:59 +0000 (0:00:00.904) 0:00:17.117 ****** 2025-09-19 00:52:39.110892 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': 
'2025-09-19 00:41:54.549637', 'end': '2025-09-19 00:41:54.837123', 'delta': '0:00:00.287486', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.110925 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-19 00:41:55.609802', 'end': '2025-09-19 00:41:55.902082', 'delta': '0:00:00.292280', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.110938 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-19 00:41:56.556763', 'end': '2025-09-19 00:41:56.864077', 'delta': '0:00:00.307314', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 
'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-19 00:52:39.110950 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.110960 | orchestrator |
2025-09-19 00:52:39.110971 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-09-19 00:52:39.110982 | orchestrator | Friday 19 September 2025 00:41:59 +0000 (0:00:00.198) 0:00:17.315 ******
2025-09-19 00:52:39.110992 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.111003 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.111014 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.111024 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:52:39.111035 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:52:39.111045 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:52:39.111056 | orchestrator |
2025-09-19 00:52:39.111066 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-09-19 00:52:39.111077 | orchestrator | Friday 19 September 2025 00:42:01 +0000 (0:00:01.693) 0:00:19.008 ******
2025-09-19 00:52:39.111087 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.111098 | orchestrator |
2025-09-19 00:52:39.111109 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-09-19 00:52:39.111119 | orchestrator | Friday 19 September 2025 00:42:02 +0000 (0:00:00.727) 0:00:19.736 ******
2025-09-19 00:52:39.111130 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.111141 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.111152 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.111162 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.111173 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:52:39.111183 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:52:39.111194 | orchestrator |
2025-09-19 00:52:39.111204 | orchestrator |
TASK [ceph-facts : Get current fsid] *******************************************
2025-09-19 00:52:39.111222 | orchestrator | Friday 19 September 2025 00:42:03 +0000 (0:00:01.770) 0:00:21.506 ******
2025-09-19 00:52:39.111233 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.111243 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.111254 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.111264 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.111274 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:52:39.111285 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:52:39.111295 | orchestrator |
2025-09-19 00:52:39.111306 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-19 00:52:39.111316 | orchestrator | Friday 19 September 2025 00:42:06 +0000 (0:00:02.338) 0:00:23.845 ******
2025-09-19 00:52:39.111327 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.111337 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.111348 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.111358 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.111369 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:52:39.111379 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:52:39.111389 | orchestrator |
2025-09-19 00:52:39.111400 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-09-19 00:52:39.111410 | orchestrator | Friday 19 September 2025 00:42:07 +0000 (0:00:00.955) 0:00:24.800 ******
2025-09-19 00:52:39.111421 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.111431 | orchestrator |
2025-09-19 00:52:39.111442 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-09-19 00:52:39.111452 | orchestrator | Friday 19 September 2025 00:42:07 +0000 (0:00:00.145) 0:00:24.946 ******
2025-09-19 00:52:39.111463 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.111474 | orchestrator |
2025-09-19 00:52:39.111484 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-19 00:52:39.111495 | orchestrator | Friday 19 September 2025 00:42:07 +0000 (0:00:00.230) 0:00:25.176 ******
2025-09-19 00:52:39.111505 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.111521 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.111532 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.111543 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.111553 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:52:39.111564 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:52:39.111575 | orchestrator |
2025-09-19 00:52:39.111585 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-09-19 00:52:39.111604 | orchestrator | Friday 19 September 2025 00:42:08 +0000 (0:00:00.510) 0:00:25.687 ******
2025-09-19 00:52:39.111615 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.111626 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.111636 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.111647 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.111657 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:52:39.111668 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:52:39.111678 | orchestrator |
2025-09-19 00:52:39.111689 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-09-19 00:52:39.111699 | orchestrator | Friday 19 September 2025 00:42:08 +0000 (0:00:00.857) 0:00:26.544 ******
2025-09-19 00:52:39.111710 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.111720 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.111731 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.111742 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.111977 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:52:39.111998 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:52:39.112009 | orchestrator |
2025-09-19 00:52:39.112019 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-09-19 00:52:39.112030 | orchestrator | Friday 19 September 2025 00:42:09 +0000 (0:00:00.636) 0:00:27.180 ******
2025-09-19 00:52:39.112041 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.112060 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.112071 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.112081 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.112092 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:52:39.112102 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:52:39.112112 | orchestrator |
2025-09-19 00:52:39.112123 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-09-19 00:52:39.112134 | orchestrator | Friday 19 September 2025 00:42:10 +0000 (0:00:00.814) 0:00:27.995 ******
2025-09-19 00:52:39.112145 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.112155 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.112166 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.112176 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.112186 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:52:39.112197 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:52:39.112208 | orchestrator |
2025-09-19 00:52:39.112219 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-09-19 00:52:39.112229 | orchestrator | Friday 19 September 2025 00:42:10 +0000 (0:00:00.611) 0:00:28.606 ******
2025-09-19 00:52:39.112240 | orchestrator | skipping: [testbed-node-0]
2025-09-19
00:52:39.112250 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.112261 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.112272 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.112282 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.112292 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.112303 | orchestrator | 2025-09-19 00:52:39.112313 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-19 00:52:39.112324 | orchestrator | Friday 19 September 2025 00:42:11 +0000 (0:00:00.593) 0:00:29.200 ****** 2025-09-19 00:52:39.112333 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.112342 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.112352 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.112361 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.112370 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.112379 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.112389 | orchestrator | 2025-09-19 00:52:39.112445 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-19 00:52:39.112456 | orchestrator | Friday 19 September 2025 00:42:12 +0000 (0:00:00.546) 0:00:29.746 ****** 2025-09-19 00:52:39.112466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:52:39.112516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:52:39.112528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:52:39.112544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:52:39.112570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:52:39.112581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.112591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.112601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.112614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45faf48c-5427-4049-a3b0-222ba6087f49', 'scsi-SQEMU_QEMU_HARDDISK_45faf48c-5427-4049-a3b0-222ba6087f49'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45faf48c-5427-4049-a3b0-222ba6087f49-part1', 'scsi-SQEMU_QEMU_HARDDISK_45faf48c-5427-4049-a3b0-222ba6087f49-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45faf48c-5427-4049-a3b0-222ba6087f49-part14', 'scsi-SQEMU_QEMU_HARDDISK_45faf48c-5427-4049-a3b0-222ba6087f49-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45faf48c-5427-4049-a3b0-222ba6087f49-part15', 'scsi-SQEMU_QEMU_HARDDISK_45faf48c-5427-4049-a3b0-222ba6087f49-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45faf48c-5427-4049-a3b0-222ba6087f49-part16', 'scsi-SQEMU_QEMU_HARDDISK_45faf48c-5427-4049-a3b0-222ba6087f49-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 00:52:39.112644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-00-02-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 00:52:39.112656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.112666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.112676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.112685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.112695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.112705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.112714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.112724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.112758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a889b39a-32f5-4a00-874e-3d7c73e2372c', 'scsi-SQEMU_QEMU_HARDDISK_a889b39a-32f5-4a00-874e-3d7c73e2372c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a889b39a-32f5-4a00-874e-3d7c73e2372c-part1', 'scsi-SQEMU_QEMU_HARDDISK_a889b39a-32f5-4a00-874e-3d7c73e2372c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a889b39a-32f5-4a00-874e-3d7c73e2372c-part14', 'scsi-SQEMU_QEMU_HARDDISK_a889b39a-32f5-4a00-874e-3d7c73e2372c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a889b39a-32f5-4a00-874e-3d7c73e2372c-part15', 'scsi-SQEMU_QEMU_HARDDISK_a889b39a-32f5-4a00-874e-3d7c73e2372c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a889b39a-32f5-4a00-874e-3d7c73e2372c-part16', 'scsi-SQEMU_QEMU_HARDDISK_a889b39a-32f5-4a00-874e-3d7c73e2372c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 00:52:39.112771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-00-01-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 00:52:39.112781 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.112791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.112801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.112810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.112934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.112956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.112967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.112976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.112986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.112997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_199ddf9d-b638-421d-a1bc-96e0d48590a2', 'scsi-SQEMU_QEMU_HARDDISK_199ddf9d-b638-421d-a1bc-96e0d48590a2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_199ddf9d-b638-421d-a1bc-96e0d48590a2-part1', 'scsi-SQEMU_QEMU_HARDDISK_199ddf9d-b638-421d-a1bc-96e0d48590a2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_199ddf9d-b638-421d-a1bc-96e0d48590a2-part14', 'scsi-SQEMU_QEMU_HARDDISK_199ddf9d-b638-421d-a1bc-96e0d48590a2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_199ddf9d-b638-421d-a1bc-96e0d48590a2-part15', 'scsi-SQEMU_QEMU_HARDDISK_199ddf9d-b638-421d-a1bc-96e0d48590a2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_199ddf9d-b638-421d-a1bc-96e0d48590a2-part16', 'scsi-SQEMU_QEMU_HARDDISK_199ddf9d-b638-421d-a1bc-96e0d48590a2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 00:52:39.113024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-00-02-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 00:52:39.113035 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.113045 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bc7aa585--dea2--57c4--a9fa--18818632dc3c-osd--block--bc7aa585--dea2--57c4--a9fa--18818632dc3c', 'dm-uuid-LVM-peC7EuXhUExYM0OH9W5LUB0gTfq5Mn8XZy9S1dInyYzQKePf1K4F5F6btSROVcVd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.113091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ba978b90--a663--5d0c--8f05--4b4e8986f79e-osd--block--ba978b90--a663--5d0c--8f05--4b4e8986f79e', 'dm-uuid-LVM-0kq9LsH3khMJXJBPflnAmhtw6k1LWcFzdDuhao44bI7HhFDExFCqRk8a5Qivdga7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.113103 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.113113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.113123 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.113132 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.113150 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.113160 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.113172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.113187 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.113196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.113204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55973005-cab9-4651-a089-f76828fe5b13', 'scsi-SQEMU_QEMU_HARDDISK_55973005-cab9-4651-a089-f76828fe5b13'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55973005-cab9-4651-a089-f76828fe5b13-part1', 'scsi-SQEMU_QEMU_HARDDISK_55973005-cab9-4651-a089-f76828fe5b13-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55973005-cab9-4651-a089-f76828fe5b13-part14', 'scsi-SQEMU_QEMU_HARDDISK_55973005-cab9-4651-a089-f76828fe5b13-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55973005-cab9-4651-a089-f76828fe5b13-part15', 'scsi-SQEMU_QEMU_HARDDISK_55973005-cab9-4651-a089-f76828fe5b13-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55973005-cab9-4651-a089-f76828fe5b13-part16', 'scsi-SQEMU_QEMU_HARDDISK_55973005-cab9-4651-a089-f76828fe5b13-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 00:52:39.113220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bc7aa585--dea2--57c4--a9fa--18818632dc3c-osd--block--bc7aa585--dea2--57c4--a9fa--18818632dc3c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PKNIfe-zvQ5-lQVM-MW32-ccVT-C3aW-1GkH9A', 'scsi-0QEMU_QEMU_HARDDISK_5095dff0-407e-4b8b-811f-a3c5cd55a16d', 'scsi-SQEMU_QEMU_HARDDISK_5095dff0-407e-4b8b-811f-a3c5cd55a16d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 00:52:39.113238 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ba978b90--a663--5d0c--8f05--4b4e8986f79e-osd--block--ba978b90--a663--5d0c--8f05--4b4e8986f79e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ENgUQe-8clb-uPlh-t6js-QVpE-6mC2-oty0V6', 'scsi-0QEMU_QEMU_HARDDISK_7d2555f8-8f26-4f5e-8b79-cd121c4d405f', 'scsi-SQEMU_QEMU_HARDDISK_7d2555f8-8f26-4f5e-8b79-cd121c4d405f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 00:52:39.113247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7c9f8b51--166c--5055--bfcb--65abe80d3110-osd--block--7c9f8b51--166c--5055--bfcb--65abe80d3110', 'dm-uuid-LVM-QN79jZEdFpP77x7qseaJoi73CZZdfAzmIlGGe0MpjgLncX42KretcJTX8BTrz4ED'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.113256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ace41295-549a-4643-92eb-07daa5f39402', 'scsi-SQEMU_QEMU_HARDDISK_ace41295-549a-4643-92eb-07daa5f39402'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 00:52:39.113264 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--25e4de26--ffd2--5ba5--a3e7--287c918a347b-osd--block--25e4de26--ffd2--5ba5--a3e7--287c918a347b', 'dm-uuid-LVM-KZZmEP1zkNZJvI2exmJffXX1NUziEioMheeu9yKxf1jgKqdEs9cMHQIipJtMU6aq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.113272 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.113286 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.113295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.113306 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-00-02-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 00:52:39.113319 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.113328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.113336 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.113344 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.113352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.113359 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.113373 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3adbf97e-ee72-4483-9697-646cf4299ea9', 'scsi-SQEMU_QEMU_HARDDISK_3adbf97e-ee72-4483-9697-646cf4299ea9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3adbf97e-ee72-4483-9697-646cf4299ea9-part1', 'scsi-SQEMU_QEMU_HARDDISK_3adbf97e-ee72-4483-9697-646cf4299ea9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3adbf97e-ee72-4483-9697-646cf4299ea9-part14', 'scsi-SQEMU_QEMU_HARDDISK_3adbf97e-ee72-4483-9697-646cf4299ea9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3adbf97e-ee72-4483-9697-646cf4299ea9-part15', 'scsi-SQEMU_QEMU_HARDDISK_3adbf97e-ee72-4483-9697-646cf4299ea9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3adbf97e-ee72-4483-9697-646cf4299ea9-part16', 'scsi-SQEMU_QEMU_HARDDISK_3adbf97e-ee72-4483-9697-646cf4299ea9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 00:52:39.113487 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7c9f8b51--166c--5055--bfcb--65abe80d3110-osd--block--7c9f8b51--166c--5055--bfcb--65abe80d3110'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Wu4HNx-Ix3l-9Lrf-RNoI-j8Qb-7eYo-keRwP1', 'scsi-0QEMU_QEMU_HARDDISK_94fdce60-5769-46af-b883-c01ec9bbc4f3', 'scsi-SQEMU_QEMU_HARDDISK_94fdce60-5769-46af-b883-c01ec9bbc4f3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 00:52:39.113500 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9c5ae36c--b075--5e22--9b23--69e08de6e546-osd--block--9c5ae36c--b075--5e22--9b23--69e08de6e546', 'dm-uuid-LVM-lfAlIdHrcDtGyKUEF5i0CQ7AW9WuYdFAvIs32dguFQnfBxTP0vlKeXjJ6EmldXOP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.113509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--25e4de26--ffd2--5ba5--a3e7--287c918a347b-osd--block--25e4de26--ffd2--5ba5--a3e7--287c918a347b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LH0ZRy-8fTh-qjKT-TbcL-BpOd-D3RO-A7MJtR', 'scsi-0QEMU_QEMU_HARDDISK_7d861b66-423b-4a73-89d0-4a2393a19521', 'scsi-SQEMU_QEMU_HARDDISK_7d861b66-423b-4a73-89d0-4a2393a19521'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 00:52:39.113524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3271a5cd--b931--506b--9a72--a7bc6b6b65fd-osd--block--3271a5cd--b931--506b--9a72--a7bc6b6b65fd', 'dm-uuid-LVM-2H1nJgTXIAlKzWZYKQGW3oGBiSW0fcaFILONhV774LItMWxXgUUO6WPV1hxOidff'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.113533 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b274d452-dc05-477a-a838-600cb81e7cbe', 'scsi-SQEMU_QEMU_HARDDISK_b274d452-dc05-477a-a838-600cb81e7cbe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 00:52:39.113548 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-00-01-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 00:52:39.113556 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:52:39.113572 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:52:39.113581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512',
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:52:39.113615 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:52:39.113624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:52:39.113632 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:52:39.113646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:52:39.113654 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:52:39.113662 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:52:39.113681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3', 'scsi-SQEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3-part1', 'scsi-SQEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3-part14', 'scsi-SQEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3-part15', 'scsi-SQEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3-part16', 'scsi-SQEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 00:52:39.113691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9c5ae36c--b075--5e22--9b23--69e08de6e546-osd--block--9c5ae36c--b075--5e22--9b23--69e08de6e546'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JV9It9-RrIQ-nRF5-y62U-tOHg-Lev3-DHJjFv', 'scsi-0QEMU_QEMU_HARDDISK_5c96df58-7556-4413-84d6-ffa963b8d5b4', 'scsi-SQEMU_QEMU_HARDDISK_5c96df58-7556-4413-84d6-ffa963b8d5b4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 00:52:39.113705 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--3271a5cd--b931--506b--9a72--a7bc6b6b65fd-osd--block--3271a5cd--b931--506b--9a72--a7bc6b6b65fd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-aLKzS5-GI8w-bf2n-GZAt-rqsY-9oL4-1Oti50', 'scsi-0QEMU_QEMU_HARDDISK_037340a3-0b4d-471e-9cf4-4052731628bd', 'scsi-SQEMU_QEMU_HARDDISK_037340a3-0b4d-471e-9cf4-4052731628bd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 00:52:39.113714 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_253dac68-3781-42b7-8d02-e83cc46bb576', 'scsi-SQEMU_QEMU_HARDDISK_253dac68-3781-42b7-8d02-e83cc46bb576'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 00:52:39.113727 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-00-02-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 00:52:39.113740 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.113748 | orchestrator | 2025-09-19 00:52:39.113756 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-19 00:52:39.113764 | orchestrator | Friday 19 September 2025 00:42:14 +0000 (0:00:02.191) 0:00:31.938 ****** 2025-09-19 00:52:39.113773 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.113781 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.113796 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.113804 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.113812 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.113824 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.113838 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-09-19 00:52:39.113846 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.113855 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45faf48c-5427-4049-a3b0-222ba6087f49', 'scsi-SQEMU_QEMU_HARDDISK_45faf48c-5427-4049-a3b0-222ba6087f49'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45faf48c-5427-4049-a3b0-222ba6087f49-part1', 'scsi-SQEMU_QEMU_HARDDISK_45faf48c-5427-4049-a3b0-222ba6087f49-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45faf48c-5427-4049-a3b0-222ba6087f49-part14', 'scsi-SQEMU_QEMU_HARDDISK_45faf48c-5427-4049-a3b0-222ba6087f49-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45faf48c-5427-4049-a3b0-222ba6087f49-part15', 'scsi-SQEMU_QEMU_HARDDISK_45faf48c-5427-4049-a3b0-222ba6087f49-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45faf48c-5427-4049-a3b0-222ba6087f49-part16', 'scsi-SQEMU_QEMU_HARDDISK_45faf48c-5427-4049-a3b0-222ba6087f49-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-19 00:52:39.114753 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-00-02-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.114782 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.114791 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.114800 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.114818 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.114827 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.114835 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.114843 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.114898 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.114909 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.114918 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a889b39a-32f5-4a00-874e-3d7c73e2372c', 'scsi-SQEMU_QEMU_HARDDISK_a889b39a-32f5-4a00-874e-3d7c73e2372c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a889b39a-32f5-4a00-874e-3d7c73e2372c-part1', 'scsi-SQEMU_QEMU_HARDDISK_a889b39a-32f5-4a00-874e-3d7c73e2372c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a889b39a-32f5-4a00-874e-3d7c73e2372c-part14', 'scsi-SQEMU_QEMU_HARDDISK_a889b39a-32f5-4a00-874e-3d7c73e2372c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a889b39a-32f5-4a00-874e-3d7c73e2372c-part15', 'scsi-SQEMU_QEMU_HARDDISK_a889b39a-32f5-4a00-874e-3d7c73e2372c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a889b39a-32f5-4a00-874e-3d7c73e2372c-part16', 'scsi-SQEMU_QEMU_HARDDISK_a889b39a-32f5-4a00-874e-3d7c73e2372c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.114937 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-00-01-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.114946 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.115381 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.115398 | orchestrator 
| skipping: [testbed-node-2] => (items loop1-loop7, sda, sr0; skip_reason: 'Conditional result was False', false_condition: 'inventory_hostname in groups.get(osd_group_name, [])'; per-device facts omitted)  2025-09-19 00:52:39.115558 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.115619 | orchestrator | skipping: [testbed-node-3] => (items dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0; skip_reason: 'Conditional result was False', false_condition: 'osd_auto_discovery | default(False) | bool'; per-device facts omitted)  2025-09-19 00:52:39.116021 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.116029 | orchestrator | skipping: [testbed-node-4] => (items dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0; skip_reason: 'Conditional result was False', false_condition: 'osd_auto_discovery | default(False) | bool'; per-device facts omitted)  2025-09-19 00:52:39.116412 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.116293 | orchestrator | skipping: [testbed-node-5] => (items dm-0, dm-1, loop0-loop7; skip_reason: 'Conditional result was False', false_condition: 'osd_auto_discovery | default(False) | bool'; per-device facts omitted)  2025-09-19 00:52:39.116585 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3', 'scsi-SQEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3-part1', 'scsi-SQEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3-part14', 'scsi-SQEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3-part15', 'scsi-SQEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3-part16', 'scsi-SQEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-19 00:52:39.116602 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9c5ae36c--b075--5e22--9b23--69e08de6e546-osd--block--9c5ae36c--b075--5e22--9b23--69e08de6e546'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JV9It9-RrIQ-nRF5-y62U-tOHg-Lev3-DHJjFv', 'scsi-0QEMU_QEMU_HARDDISK_5c96df58-7556-4413-84d6-ffa963b8d5b4', 'scsi-SQEMU_QEMU_HARDDISK_5c96df58-7556-4413-84d6-ffa963b8d5b4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.116610 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--3271a5cd--b931--506b--9a72--a7bc6b6b65fd-osd--block--3271a5cd--b931--506b--9a72--a7bc6b6b65fd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-aLKzS5-GI8w-bf2n-GZAt-rqsY-9oL4-1Oti50', 'scsi-0QEMU_QEMU_HARDDISK_037340a3-0b4d-471e-9cf4-4052731628bd', 'scsi-SQEMU_QEMU_HARDDISK_037340a3-0b4d-471e-9cf4-4052731628bd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.116616 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_253dac68-3781-42b7-8d02-e83cc46bb576', 'scsi-SQEMU_QEMU_HARDDISK_253dac68-3781-42b7-8d02-e83cc46bb576'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.116623 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-00-02-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:52:39.116638 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.116645 | orchestrator | 2025-09-19 00:52:39.116652 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-19 00:52:39.116659 | orchestrator | Friday 19 September 2025 00:42:16 +0000 (0:00:01.712) 0:00:33.650 ****** 2025-09-19 00:52:39.116666 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.116673 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.116679 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.116728 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.116737 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.116744 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.116750 | orchestrator | 2025-09-19 00:52:39.116757 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-19 00:52:39.116763 | orchestrator | Friday 19 September 2025 00:42:17 +0000 (0:00:01.102) 0:00:34.753 ****** 2025-09-19 00:52:39.116770 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.116776 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.116783 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.116800 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.116806 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.116813 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.116819 | orchestrator | 2025-09-19 00:52:39.116826 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-19 00:52:39.116833 | orchestrator | Friday 19 September 2025 00:42:18 +0000 (0:00:00.993) 0:00:35.746 ****** 2025-09-19 00:52:39.116839 | orchestrator | skipping: [testbed-node-0] 2025-09-19 
00:52:39.116846 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.116852 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.116859 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.116882 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.116889 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.116895 | orchestrator | 2025-09-19 00:52:39.116902 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-19 00:52:39.116908 | orchestrator | Friday 19 September 2025 00:42:18 +0000 (0:00:00.847) 0:00:36.594 ****** 2025-09-19 00:52:39.116915 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.116921 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.116928 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.116935 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.116945 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.116955 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.116965 | orchestrator | 2025-09-19 00:52:39.116975 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-19 00:52:39.116987 | orchestrator | Friday 19 September 2025 00:42:19 +0000 (0:00:00.795) 0:00:37.389 ****** 2025-09-19 00:52:39.116994 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.117001 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.117007 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.117014 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.117020 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.117027 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.117033 | orchestrator | 2025-09-19 00:52:39.117039 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-19 00:52:39.117046 | orchestrator | Friday 19 
September 2025 00:42:20 +0000 (0:00:00.839) 0:00:38.228 ****** 2025-09-19 00:52:39.117062 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.117069 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.117080 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.117087 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.117093 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.117100 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.117106 | orchestrator | 2025-09-19 00:52:39.117113 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-19 00:52:39.117119 | orchestrator | Friday 19 September 2025 00:42:21 +0000 (0:00:01.041) 0:00:39.270 ****** 2025-09-19 00:52:39.117126 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-09-19 00:52:39.117132 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-09-19 00:52:39.117139 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-19 00:52:39.117145 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-09-19 00:52:39.117152 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-09-19 00:52:39.117158 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-19 00:52:39.117165 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-19 00:52:39.117171 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-09-19 00:52:39.117177 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-19 00:52:39.117184 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-19 00:52:39.117190 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-19 00:52:39.117196 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-09-19 00:52:39.117203 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-19 00:52:39.117209 | orchestrator | ok: 
[testbed-node-5] => (item=testbed-node-1) 2025-09-19 00:52:39.117216 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-19 00:52:39.117222 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-19 00:52:39.117228 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-19 00:52:39.117235 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-19 00:52:39.117241 | orchestrator | 2025-09-19 00:52:39.117248 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-19 00:52:39.117254 | orchestrator | Friday 19 September 2025 00:42:25 +0000 (0:00:03.978) 0:00:43.249 ****** 2025-09-19 00:52:39.117261 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-19 00:52:39.117267 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-19 00:52:39.117274 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-19 00:52:39.117280 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-19 00:52:39.117287 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-09-19 00:52:39.117293 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-19 00:52:39.117299 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.117310 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-09-19 00:52:39.117316 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-19 00:52:39.117323 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-09-19 00:52:39.117329 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.117336 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-19 00:52:39.117367 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-19 00:52:39.117375 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  
2025-09-19 00:52:39.117381 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.117387 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-19 00:52:39.117394 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-19 00:52:39.117400 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-19 00:52:39.117407 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.117413 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.117424 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-19 00:52:39.117431 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-19 00:52:39.117437 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-19 00:52:39.117443 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.117450 | orchestrator | 2025-09-19 00:52:39.117456 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-19 00:52:39.117463 | orchestrator | Friday 19 September 2025 00:42:26 +0000 (0:00:00.938) 0:00:44.188 ****** 2025-09-19 00:52:39.117470 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.117476 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.117483 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.117489 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.117496 | orchestrator | 2025-09-19 00:52:39.117503 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-19 00:52:39.117510 | orchestrator | Friday 19 September 2025 00:42:27 +0000 (0:00:01.111) 0:00:45.299 ****** 2025-09-19 00:52:39.117517 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.117523 | orchestrator | skipping: 
[testbed-node-4] 2025-09-19 00:52:39.117530 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.117536 | orchestrator | 2025-09-19 00:52:39.117543 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-19 00:52:39.117549 | orchestrator | Friday 19 September 2025 00:42:28 +0000 (0:00:00.508) 0:00:45.808 ****** 2025-09-19 00:52:39.117556 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.117562 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.117568 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.117587 | orchestrator | 2025-09-19 00:52:39.117594 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-19 00:52:39.117601 | orchestrator | Friday 19 September 2025 00:42:28 +0000 (0:00:00.536) 0:00:46.344 ****** 2025-09-19 00:52:39.117608 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.117614 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.117621 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.117627 | orchestrator | 2025-09-19 00:52:39.117634 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-19 00:52:39.117640 | orchestrator | Friday 19 September 2025 00:42:29 +0000 (0:00:00.381) 0:00:46.726 ****** 2025-09-19 00:52:39.117647 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.117653 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.117660 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.117667 | orchestrator | 2025-09-19 00:52:39.117673 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-19 00:52:39.117680 | orchestrator | Friday 19 September 2025 00:42:29 +0000 (0:00:00.444) 0:00:47.170 ****** 2025-09-19 00:52:39.117686 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 00:52:39.117693 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 00:52:39.117699 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 00:52:39.117706 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.117712 | orchestrator | 2025-09-19 00:52:39.117719 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-19 00:52:39.117725 | orchestrator | Friday 19 September 2025 00:42:29 +0000 (0:00:00.395) 0:00:47.565 ****** 2025-09-19 00:52:39.117732 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 00:52:39.117739 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 00:52:39.117745 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 00:52:39.117752 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.117758 | orchestrator | 2025-09-19 00:52:39.117765 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-19 00:52:39.117775 | orchestrator | Friday 19 September 2025 00:42:30 +0000 (0:00:00.403) 0:00:47.968 ****** 2025-09-19 00:52:39.117782 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 00:52:39.117789 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 00:52:39.117795 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 00:52:39.117802 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.117808 | orchestrator | 2025-09-19 00:52:39.117815 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-19 00:52:39.117821 | orchestrator | Friday 19 September 2025 00:42:30 +0000 (0:00:00.334) 0:00:48.302 ****** 2025-09-19 00:52:39.117828 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.117834 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.117841 | orchestrator | ok: [testbed-node-5] 
2025-09-19 00:52:39.117847 | orchestrator | 2025-09-19 00:52:39.117854 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-19 00:52:39.117861 | orchestrator | Friday 19 September 2025 00:42:31 +0000 (0:00:00.521) 0:00:48.824 ****** 2025-09-19 00:52:39.117885 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-19 00:52:39.117892 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-19 00:52:39.117899 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-19 00:52:39.117905 | orchestrator | 2025-09-19 00:52:39.117912 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-19 00:52:39.117919 | orchestrator | Friday 19 September 2025 00:42:31 +0000 (0:00:00.737) 0:00:49.561 ****** 2025-09-19 00:52:39.117945 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-19 00:52:39.117953 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 00:52:39.117959 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 00:52:39.117966 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-09-19 00:52:39.117972 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-19 00:52:39.117979 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-19 00:52:39.117985 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-19 00:52:39.117992 | orchestrator | 2025-09-19 00:52:39.117998 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-19 00:52:39.118005 | orchestrator | Friday 19 September 2025 00:42:32 +0000 (0:00:00.807) 0:00:50.369 ****** 2025-09-19 00:52:39.118011 | orchestrator | ok: [testbed-node-0] => 
(item=testbed-node-0) 2025-09-19 00:52:39.118042 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 00:52:39.118049 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 00:52:39.118055 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-09-19 00:52:39.118062 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-19 00:52:39.118068 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-19 00:52:39.118075 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-19 00:52:39.118081 | orchestrator | 2025-09-19 00:52:39.118088 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-19 00:52:39.118094 | orchestrator | Friday 19 September 2025 00:42:34 +0000 (0:00:01.709) 0:00:52.078 ****** 2025-09-19 00:52:39.118101 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.118109 | orchestrator | 2025-09-19 00:52:39.118116 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-19 00:52:39.118127 | orchestrator | Friday 19 September 2025 00:42:35 +0000 (0:00:01.074) 0:00:53.153 ****** 2025-09-19 00:52:39.118134 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.118141 | orchestrator | 2025-09-19 00:52:39.118147 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-19 00:52:39.118154 | orchestrator | Friday 19 September 2025 
00:42:36 +0000 (0:00:01.019) 0:00:54.173 ****** 2025-09-19 00:52:39.118160 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.118167 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.118173 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.118180 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.118187 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.118193 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.118200 | orchestrator | 2025-09-19 00:52:39.118206 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-19 00:52:39.118213 | orchestrator | Friday 19 September 2025 00:42:37 +0000 (0:00:00.896) 0:00:55.069 ****** 2025-09-19 00:52:39.118219 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.118226 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.118232 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.118239 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.118245 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.118252 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.118258 | orchestrator | 2025-09-19 00:52:39.118265 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-19 00:52:39.118272 | orchestrator | Friday 19 September 2025 00:42:38 +0000 (0:00:01.115) 0:00:56.184 ****** 2025-09-19 00:52:39.118278 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.118285 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.118291 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.118298 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.118304 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.118311 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.118317 | orchestrator | 2025-09-19 00:52:39.118324 | orchestrator | TASK [ceph-handler : Check for a rgw container] 
******************************** 2025-09-19 00:52:39.118330 | orchestrator | Friday 19 September 2025 00:42:39 +0000 (0:00:01.360) 0:00:57.544 ****** 2025-09-19 00:52:39.118337 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.118343 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.118350 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.118356 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.118363 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.118369 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.118376 | orchestrator | 2025-09-19 00:52:39.118382 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-19 00:52:39.118389 | orchestrator | Friday 19 September 2025 00:42:40 +0000 (0:00:01.062) 0:00:58.607 ****** 2025-09-19 00:52:39.118396 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.118402 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.118409 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.118418 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.118425 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.118432 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.118438 | orchestrator | 2025-09-19 00:52:39.118445 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-19 00:52:39.118452 | orchestrator | Friday 19 September 2025 00:42:41 +0000 (0:00:01.027) 0:00:59.635 ****** 2025-09-19 00:52:39.118478 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.118486 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.118492 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.118499 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.118505 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.118512 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.118522 | 
2025-09-19 00:52:39.118529 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
orchestrator | Friday 19 September 2025 00:42:42 +0000 (0:00:00.593) 0:01:00.228 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
orchestrator | Friday 19 September 2025 00:42:43 +0000 (0:00:00.762) 0:01:00.990 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
orchestrator | Friday 19 September 2025 00:42:44 +0000 (0:00:01.425) 0:01:02.415 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
orchestrator | Friday 19 September 2025 00:42:46 +0000 (0:00:01.924) 0:01:04.340 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
orchestrator | Friday 19 September 2025 00:42:47 +0000 (0:00:00.707) 0:01:05.047 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
orchestrator | Friday 19 September 2025 00:42:48 +0000 (0:00:01.122) 0:01:06.169 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
orchestrator | Friday 19 September 2025 00:42:49 +0000 (0:00:01.110) 0:01:07.280 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
orchestrator | Friday 19 September 2025 00:42:50 +0000 (0:00:01.332) 0:01:08.613 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
orchestrator | Friday 19 September 2025 00:42:52 +0000 (0:00:01.108) 0:01:09.721 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
orchestrator | Friday 19 September 2025 00:42:53 +0000 (0:00:01.303) 0:01:11.025 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
orchestrator | Friday 19 September 2025 00:42:54 +0000 (0:00:01.097) 0:01:12.123 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
orchestrator | Friday 19 September 2025 00:42:55 +0000 (0:00:00.977) 0:01:13.100 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
orchestrator | Friday 19 September 2025 00:42:56 +0000 (0:00:00.648) 0:01:13.749 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
orchestrator | Friday 19 September 2025 00:42:57 +0000 (0:00:01.404) 0:01:15.153 ******
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-5]
orchestrator | changed: [testbed-node-4]
orchestrator |
orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
orchestrator | Friday 19 September 2025 00:42:59 +0000 (0:00:02.095) 0:01:17.249 ******
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator | changed: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
orchestrator | Friday 19 September 2025 00:43:02 +0000 (0:00:02.447) 0:01:19.697 ******
orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
orchestrator | Friday 19 September 2025 00:43:03 +0000 (0:00:01.400) 0:01:21.098 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
orchestrator | Friday 19 September 2025 00:43:04 +0000 (0:00:00.884) 0:01:21.982 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
orchestrator | Friday 19 September 2025 00:43:04 +0000 (0:00:00.625) 0:01:22.608 ******
orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
orchestrator |
orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
orchestrator | Friday 19 September 2025 00:43:06 +0000 (0:00:01.931) 0:01:24.540 ******
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
orchestrator | Friday 19 September 2025 00:43:07 +0000 (0:00:01.040) 0:01:25.580 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
orchestrator | Friday 19 September 2025 00:43:08 +0000 (0:00:00.827) 0:01:26.408 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
orchestrator | Friday 19 September 2025 00:43:09 +0000 (0:00:00.574) 0:01:26.983 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
orchestrator | Friday 19 September 2025 00:43:10 +0000 (0:00:00.793) 0:01:27.777 ******
orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
orchestrator | Friday 19 September 2025 00:43:11 +0000 (0:00:01.157) 0:01:28.934 ******
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
orchestrator | Friday 19 September 2025 00:43:51 +0000 (0:00:40.324) 0:02:09.259 ******
orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
orchestrator | Friday 19 September 2025 00:43:52 +0000 (0:00:00.813) 0:02:10.073 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
orchestrator | Friday 19 September 2025 00:43:53 +0000 (0:00:00.620) 0:02:10.693 ******
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
orchestrator | Friday 19 September 2025 00:43:53 +0000 (0:00:00.131) 0:02:10.824 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
orchestrator | Friday 19 September 2025 00:43:54 +0000 (0:00:00.868) 0:02:11.693 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
orchestrator | Friday 19 September 2025 00:43:54 +0000 (0:00:00.631) 0:02:12.325 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
orchestrator | Friday 19 September 2025 00:43:55 +0000 (0:00:00.829) 0:02:13.154 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
orchestrator | Friday 19 September 2025 00:43:57 +0000 (0:00:01.953) 0:02:15.108 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
orchestrator | Friday 19 September 2025 00:43:58 +0000 (0:00:00.876) 0:02:15.984 ******
orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
orchestrator | Friday 19 September 2025 00:43:59 +0000 (0:00:01.265) 0:02:17.250 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
orchestrator | Friday 19 September 2025 00:44:00 +0000 (0:00:00.684) 0:02:17.935 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
orchestrator | Friday 19 September 2025 00:44:01 +0000 (0:00:00.914) 0:02:18.849 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
orchestrator | Friday 19 September 2025 00:44:01 +0000 (0:00:00.493) 0:02:19.343 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
orchestrator | Friday 19 September 2025 00:44:02 +0000 (0:00:00.634) 0:02:19.978 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
orchestrator | Friday 19 September 2025 00:44:02 +0000 (0:00:00.642) 0:02:20.620 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
orchestrator | Friday 19 September 2025 00:44:03 +0000 (0:00:00.691) 0:02:21.312 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
orchestrator | Friday 19 September 2025 00:44:04 +0000 (0:00:00.654) 0:02:21.966 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
orchestrator | Friday 19 September 2025 00:44:05 +0000 (0:00:00.701) 0:02:22.668 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
orchestrator | Friday 19 September 2025 00:44:06 +0000 (0:00:00.982) 0:02:23.650 ******
orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
orchestrator | Friday 19 September 2025 00:44:06 +0000 (0:00:00.890) 0:02:24.541 ******
orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
| orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 00:52:39.121674 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 00:52:39.121684 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 00:52:39.121697 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 00:52:39.121707 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 00:52:39.121716 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 00:52:39.121726 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 00:52:39.121736 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 00:52:39.121751 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 00:52:39.121758 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 00:52:39.121764 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 00:52:39.121770 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 00:52:39.121800 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 00:52:39.121807 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 00:52:39.121813 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 00:52:39.121819 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 00:52:39.121825 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-09-19 00:52:39.121832 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-09-19 00:52:39.121838 | orchestrator | changed: [testbed-node-2] => 
(item=/var/run/ceph) 2025-09-19 00:52:39.121844 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 00:52:39.121850 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-09-19 00:52:39.121856 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-09-19 00:52:39.121862 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-09-19 00:52:39.121883 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-09-19 00:52:39.121890 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-09-19 00:52:39.121896 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-09-19 00:52:39.121902 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-09-19 00:52:39.121908 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-09-19 00:52:39.121914 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-09-19 00:52:39.121920 | orchestrator | 2025-09-19 00:52:39.121926 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-09-19 00:52:39.121932 | orchestrator | Friday 19 September 2025 00:44:13 +0000 (0:00:07.051) 0:02:31.593 ****** 2025-09-19 00:52:39.121938 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.121945 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.121951 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.121957 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.121963 | orchestrator | 2025-09-19 00:52:39.121969 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-09-19 00:52:39.121975 | orchestrator | Friday 19 September 2025 00:44:14 +0000 (0:00:00.962) 0:02:32.556 ****** 2025-09-19 00:52:39.121981 | orchestrator | changed: 
[testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 00:52:39.121988 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-19 00:52:39.121999 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 00:52:39.122006 | orchestrator | 2025-09-19 00:52:39.122012 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-09-19 00:52:39.122039 | orchestrator | Friday 19 September 2025 00:44:15 +0000 (0:00:00.699) 0:02:33.255 ****** 2025-09-19 00:52:39.122046 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 00:52:39.122052 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-19 00:52:39.122058 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 00:52:39.122064 | orchestrator | 2025-09-19 00:52:39.122070 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-09-19 00:52:39.122076 | orchestrator | Friday 19 September 2025 00:44:17 +0000 (0:00:01.438) 0:02:34.693 ****** 2025-09-19 00:52:39.122082 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.122088 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.122095 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.122101 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.122107 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.122113 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.122119 | 
orchestrator | 2025-09-19 00:52:39.122125 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-09-19 00:52:39.122131 | orchestrator | Friday 19 September 2025 00:44:17 +0000 (0:00:00.877) 0:02:35.571 ****** 2025-09-19 00:52:39.122137 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.122143 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.122149 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.122155 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.122161 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.122167 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.122173 | orchestrator | 2025-09-19 00:52:39.122179 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-09-19 00:52:39.122186 | orchestrator | Friday 19 September 2025 00:44:18 +0000 (0:00:00.624) 0:02:36.195 ****** 2025-09-19 00:52:39.122192 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.122198 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.122204 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.122210 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.122216 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.122239 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.122245 | orchestrator | 2025-09-19 00:52:39.122251 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-09-19 00:52:39.122257 | orchestrator | Friday 19 September 2025 00:44:19 +0000 (0:00:00.942) 0:02:37.137 ****** 2025-09-19 00:52:39.122263 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.122270 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.122295 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.122302 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.122308 | orchestrator | 
skipping: [testbed-node-4] 2025-09-19 00:52:39.122314 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.122320 | orchestrator | 2025-09-19 00:52:39.122326 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-09-19 00:52:39.122333 | orchestrator | Friday 19 September 2025 00:44:20 +0000 (0:00:00.586) 0:02:37.723 ****** 2025-09-19 00:52:39.122339 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.122345 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.122351 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.122357 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.122368 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.122374 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.122380 | orchestrator | 2025-09-19 00:52:39.122386 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-09-19 00:52:39.122393 | orchestrator | Friday 19 September 2025 00:44:21 +0000 (0:00:00.956) 0:02:38.680 ****** 2025-09-19 00:52:39.122399 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.122405 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.122411 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.122417 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.122423 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.122429 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.122435 | orchestrator | 2025-09-19 00:52:39.122441 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-09-19 00:52:39.122447 | orchestrator | Friday 19 September 2025 00:44:21 +0000 (0:00:00.664) 0:02:39.344 ****** 2025-09-19 00:52:39.122453 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.122459 | orchestrator | skipping: [testbed-node-1] 
2025-09-19 00:52:39.122465 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.122471 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.122477 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.122483 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.122489 | orchestrator | 2025-09-19 00:52:39.122495 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-09-19 00:52:39.122502 | orchestrator | Friday 19 September 2025 00:44:22 +0000 (0:00:00.781) 0:02:40.126 ****** 2025-09-19 00:52:39.122508 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.122514 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.122520 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.122526 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.122532 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.122538 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.122544 | orchestrator | 2025-09-19 00:52:39.122550 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-09-19 00:52:39.122556 | orchestrator | Friday 19 September 2025 00:44:23 +0000 (0:00:00.640) 0:02:40.766 ****** 2025-09-19 00:52:39.122562 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.122568 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.122574 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.122580 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.122586 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.122592 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.122598 | orchestrator | 2025-09-19 00:52:39.122604 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-09-19 00:52:39.122610 | orchestrator | Friday 19 September 2025 00:44:26 +0000 
(0:00:03.402) 0:02:44.168 ****** 2025-09-19 00:52:39.122616 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.122622 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.122628 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.122634 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.122640 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.122646 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.122652 | orchestrator | 2025-09-19 00:52:39.122659 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-09-19 00:52:39.122665 | orchestrator | Friday 19 September 2025 00:44:27 +0000 (0:00:00.513) 0:02:44.682 ****** 2025-09-19 00:52:39.122671 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.122677 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.122683 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.122689 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.122695 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.122701 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.122711 | orchestrator | 2025-09-19 00:52:39.122717 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-09-19 00:52:39.122724 | orchestrator | Friday 19 September 2025 00:44:27 +0000 (0:00:00.736) 0:02:45.419 ****** 2025-09-19 00:52:39.122730 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.122736 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.122742 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.122748 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.122754 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.122760 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.122766 | orchestrator | 2025-09-19 00:52:39.122772 | orchestrator | TASK [ceph-config : Render rgw configs] 
**************************************** 2025-09-19 00:52:39.122778 | orchestrator | Friday 19 September 2025 00:44:28 +0000 (0:00:00.611) 0:02:46.031 ****** 2025-09-19 00:52:39.122784 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.122790 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.122796 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.122802 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 00:52:39.122812 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-19 00:52:39.122818 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 00:52:39.122824 | orchestrator | 2025-09-19 00:52:39.122830 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-09-19 00:52:39.122853 | orchestrator | Friday 19 September 2025 00:44:29 +0000 (0:00:00.803) 0:02:46.834 ****** 2025-09-19 00:52:39.122860 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.122903 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.122910 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.122917 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-09-19 00:52:39.122925 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, 
{'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-09-19 00:52:39.122932 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.122939 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-09-19 00:52:39.122945 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-09-19 00:52:39.122951 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.122958 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-09-19 00:52:39.122964 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-09-19 00:52:39.122977 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.122983 | orchestrator | 2025-09-19 00:52:39.122990 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-09-19 00:52:39.122996 | orchestrator | Friday 19 September 2025 00:44:29 +0000 (0:00:00.643) 0:02:47.477 ****** 2025-09-19 00:52:39.123002 | 
orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.123008 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.123014 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.123020 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.123026 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.123032 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.123038 | orchestrator | 2025-09-19 00:52:39.123044 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-09-19 00:52:39.123050 | orchestrator | Friday 19 September 2025 00:44:30 +0000 (0:00:00.701) 0:02:48.178 ****** 2025-09-19 00:52:39.123056 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.123062 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.123068 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.123074 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.123080 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.123086 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.123092 | orchestrator | 2025-09-19 00:52:39.123098 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-19 00:52:39.123104 | orchestrator | Friday 19 September 2025 00:44:31 +0000 (0:00:00.544) 0:02:48.723 ****** 2025-09-19 00:52:39.123110 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.123116 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.123122 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.123128 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.123134 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.123139 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.123144 | orchestrator | 2025-09-19 00:52:39.123149 | orchestrator | TASK [ceph-facts : Set_fact 
_radosgw_address to radosgw_address_block ipv4] **** 2025-09-19 00:52:39.123154 | orchestrator | Friday 19 September 2025 00:44:31 +0000 (0:00:00.695) 0:02:49.418 ****** 2025-09-19 00:52:39.123160 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.123165 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.123170 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.123175 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.123181 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.123186 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.123191 | orchestrator | 2025-09-19 00:52:39.123200 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-19 00:52:39.123205 | orchestrator | Friday 19 September 2025 00:44:32 +0000 (0:00:00.651) 0:02:50.070 ****** 2025-09-19 00:52:39.123211 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.123216 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.123221 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.123227 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.123248 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.123254 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.123259 | orchestrator | 2025-09-19 00:52:39.123265 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-19 00:52:39.123270 | orchestrator | Friday 19 September 2025 00:44:33 +0000 (0:00:00.930) 0:02:51.000 ****** 2025-09-19 00:52:39.123275 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.123280 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.123286 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.123291 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.123296 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.123306 | orchestrator | ok: 
[testbed-node-5] 2025-09-19 00:52:39.123311 | orchestrator | 2025-09-19 00:52:39.123317 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-19 00:52:39.123322 | orchestrator | Friday 19 September 2025 00:44:34 +0000 (0:00:00.909) 0:02:51.910 ****** 2025-09-19 00:52:39.123327 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-09-19 00:52:39.123333 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-09-19 00:52:39.123338 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-09-19 00:52:39.123343 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.123348 | orchestrator | 2025-09-19 00:52:39.123353 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-19 00:52:39.123359 | orchestrator | Friday 19 September 2025 00:44:34 +0000 (0:00:00.494) 0:02:52.404 ****** 2025-09-19 00:52:39.123364 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-09-19 00:52:39.123369 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-09-19 00:52:39.123374 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-09-19 00:52:39.123380 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.123385 | orchestrator | 2025-09-19 00:52:39.123390 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-19 00:52:39.123395 | orchestrator | Friday 19 September 2025 00:44:35 +0000 (0:00:00.517) 0:02:52.922 ****** 2025-09-19 00:52:39.123401 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-09-19 00:52:39.123406 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-09-19 00:52:39.123411 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-09-19 00:52:39.123416 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.123422 | 
orchestrator | 2025-09-19 00:52:39.123427 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-19 00:52:39.123432 | orchestrator | Friday 19 September 2025 00:44:36 +0000 (0:00:00.732) 0:02:53.654 ****** 2025-09-19 00:52:39.123437 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.123443 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.123448 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.123453 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.123458 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.123464 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.123469 | orchestrator | 2025-09-19 00:52:39.123474 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-19 00:52:39.123480 | orchestrator | Friday 19 September 2025 00:44:36 +0000 (0:00:00.580) 0:02:54.235 ****** 2025-09-19 00:52:39.123485 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-09-19 00:52:39.123490 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.123495 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-09-19 00:52:39.123501 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.123506 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-09-19 00:52:39.123511 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.123516 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-19 00:52:39.123521 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-19 00:52:39.123527 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-19 00:52:39.123532 | orchestrator | 2025-09-19 00:52:39.123537 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-09-19 00:52:39.123542 | orchestrator | Friday 19 September 2025 00:44:38 +0000 (0:00:02.074) 0:02:56.309 ****** 2025-09-19 00:52:39.123548 | orchestrator | changed: 
[testbed-node-1]
2025-09-19 00:52:39.123553 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:52:39.123558 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:52:39.123563 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:52:39.123569 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:52:39.123574 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:52:39.123579 | orchestrator |
2025-09-19 00:52:39.123589 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-19 00:52:39.123594 | orchestrator | Friday 19 September 2025 00:44:41 +0000 (0:00:03.111) 0:02:59.421 ******
2025-09-19 00:52:39.123600 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:52:39.123605 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:52:39.123610 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:52:39.123615 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:52:39.123620 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:52:39.123626 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:52:39.123631 | orchestrator |
2025-09-19 00:52:39.123636 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-09-19 00:52:39.123641 | orchestrator | Friday 19 September 2025 00:44:43 +0000 (0:00:01.278) 0:03:00.699 ******
2025-09-19 00:52:39.123647 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.123652 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:52:39.123657 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:52:39.123662 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 00:52:39.123668 | orchestrator |
2025-09-19 00:52:39.123676 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-09-19 00:52:39.123682 | orchestrator | Friday 19 September 2025 00:44:44 +0000 (0:00:01.031) 0:03:01.731 ******
2025-09-19 00:52:39.123687 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.123692 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.123697 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.123703 | orchestrator |
2025-09-19 00:52:39.123708 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-09-19 00:52:39.123728 | orchestrator | Friday 19 September 2025 00:44:44 +0000 (0:00:00.321) 0:03:02.052 ******
2025-09-19 00:52:39.123734 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:52:39.123739 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:52:39.123744 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:52:39.123750 | orchestrator |
2025-09-19 00:52:39.123755 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-09-19 00:52:39.123760 | orchestrator | Friday 19 September 2025 00:44:45 +0000 (0:00:01.220) 0:03:03.273 ******
2025-09-19 00:52:39.123766 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 00:52:39.123771 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 00:52:39.123776 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 00:52:39.123782 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.123787 | orchestrator |
2025-09-19 00:52:39.123792 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-09-19 00:52:39.123797 | orchestrator | Friday 19 September 2025 00:44:46 +0000 (0:00:00.758) 0:03:04.031 ******
2025-09-19 00:52:39.123803 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.123808 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.123813 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.123818 | orchestrator |
2025-09-19 00:52:39.123824 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-09-19 00:52:39.123829 | orchestrator | Friday 19 September 2025 00:44:46 +0000 (0:00:00.514) 0:03:04.546 ******
2025-09-19 00:52:39.123834 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.123839 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.123844 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.123850 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 00:52:39.123855 | orchestrator |
2025-09-19 00:52:39.123860 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-09-19 00:52:39.123877 | orchestrator | Friday 19 September 2025 00:44:47 +0000 (0:00:00.896) 0:03:05.442 ******
2025-09-19 00:52:39.123883 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 00:52:39.123889 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 00:52:39.123898 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 00:52:39.123903 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.123909 | orchestrator |
2025-09-19 00:52:39.123914 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-09-19 00:52:39.123919 | orchestrator | Friday 19 September 2025 00:44:48 +0000 (0:00:00.497) 0:03:05.939 ******
2025-09-19 00:52:39.123924 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.123930 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:52:39.123935 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:52:39.123940 | orchestrator |
2025-09-19 00:52:39.123946 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-09-19 00:52:39.123951 | orchestrator | Friday 19 September 2025 00:44:48 +0000 (0:00:00.492) 0:03:06.431 ******
2025-09-19 00:52:39.123956 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.123962 | orchestrator |
2025-09-19 00:52:39.123967 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-09-19 00:52:39.123972 | orchestrator | Friday 19 September 2025 00:44:48 +0000 (0:00:00.199) 0:03:06.631 ******
2025-09-19 00:52:39.123978 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.123983 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:52:39.123988 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:52:39.123993 | orchestrator |
2025-09-19 00:52:39.123998 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-09-19 00:52:39.124004 | orchestrator | Friday 19 September 2025 00:44:49 +0000 (0:00:00.331) 0:03:06.962 ******
2025-09-19 00:52:39.124009 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.124014 | orchestrator |
2025-09-19 00:52:39.124020 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-09-19 00:52:39.124025 | orchestrator | Friday 19 September 2025 00:44:49 +0000 (0:00:00.187) 0:03:07.150 ******
2025-09-19 00:52:39.124030 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.124036 | orchestrator |
2025-09-19 00:52:39.124041 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-09-19 00:52:39.124046 | orchestrator | Friday 19 September 2025 00:44:49 +0000 (0:00:00.224) 0:03:07.374 ******
2025-09-19 00:52:39.124052 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.124057 | orchestrator |
2025-09-19 00:52:39.124062 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-09-19 00:52:39.124068 | orchestrator | Friday 19 September 2025 00:44:49 +0000 (0:00:00.117) 0:03:07.492 ******
2025-09-19 00:52:39.124073 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.124078 | orchestrator |
2025-09-19 00:52:39.124083 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-09-19 00:52:39.124089 | orchestrator | Friday 19 September 2025 00:44:50 +0000 (0:00:00.212) 0:03:07.705 ******
2025-09-19 00:52:39.124094 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.124099 | orchestrator |
2025-09-19 00:52:39.124105 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-09-19 00:52:39.124110 | orchestrator | Friday 19 September 2025 00:44:50 +0000 (0:00:00.199) 0:03:07.905 ******
2025-09-19 00:52:39.124115 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 00:52:39.124120 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 00:52:39.124126 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 00:52:39.124131 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.124136 | orchestrator |
2025-09-19 00:52:39.124144 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-09-19 00:52:39.124150 | orchestrator | Friday 19 September 2025 00:44:50 +0000 (0:00:00.619) 0:03:08.524 ******
2025-09-19 00:52:39.124155 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.124160 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:52:39.124165 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:52:39.124170 | orchestrator |
2025-09-19 00:52:39.124194 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-09-19 00:52:39.124200 | orchestrator | Friday 19 September 2025 00:44:51 +0000 (0:00:00.582) 0:03:09.106 ******
2025-09-19 00:52:39.124206 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.124211 | orchestrator |
2025-09-19 00:52:39.124217 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-09-19 00:52:39.124222 | orchestrator | Friday 19 September 2025 00:44:51 +0000 (0:00:00.203) 0:03:09.309 ******
2025-09-19 00:52:39.124227 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.124233 | orchestrator |
2025-09-19 00:52:39.124238 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-09-19 00:52:39.124243 | orchestrator | Friday 19 September 2025 00:44:51 +0000 (0:00:00.202) 0:03:09.512 ******
2025-09-19 00:52:39.124248 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.124254 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.124259 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.124264 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 00:52:39.124270 | orchestrator |
2025-09-19 00:52:39.124275 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-09-19 00:52:39.124280 | orchestrator | Friday 19 September 2025 00:44:52 +0000 (0:00:01.034) 0:03:10.546 ******
2025-09-19 00:52:39.124285 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:52:39.124290 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:52:39.124296 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:52:39.124301 | orchestrator |
2025-09-19 00:52:39.124306 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-09-19 00:52:39.124311 | orchestrator | Friday 19 September 2025 00:44:53 +0000 (0:00:00.302) 0:03:10.849 ******
2025-09-19 00:52:39.124317 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:52:39.124322 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:52:39.124327 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:52:39.124333 | orchestrator |
2025-09-19 00:52:39.124338 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-09-19 00:52:39.124343 | orchestrator | Friday 19 September 2025 00:44:54 +0000 (0:00:01.343) 0:03:12.192 ******
2025-09-19 00:52:39.124348 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 00:52:39.124354 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 00:52:39.124359 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 00:52:39.124364 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.124369 | orchestrator |
2025-09-19 00:52:39.124375 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-09-19 00:52:39.124380 | orchestrator | Friday 19 September 2025 00:44:55 +0000 (0:00:00.836) 0:03:13.029 ******
2025-09-19 00:52:39.124385 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:52:39.124390 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:52:39.124396 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:52:39.124401 | orchestrator |
2025-09-19 00:52:39.124406 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-09-19 00:52:39.124411 | orchestrator | Friday 19 September 2025 00:44:55 +0000 (0:00:00.365) 0:03:13.394 ******
2025-09-19 00:52:39.124416 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.124422 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.124427 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.124432 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 00:52:39.124438 | orchestrator |
2025-09-19 00:52:39.124443 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-09-19 00:52:39.124448 | orchestrator | Friday 19 September 2025 00:44:56 +0000 (0:00:00.946) 0:03:14.341 ******
2025-09-19 00:52:39.124453 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:52:39.124459 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:52:39.124469 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:52:39.124474 | orchestrator |
2025-09-19 00:52:39.124479 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-09-19 00:52:39.124485 | orchestrator | Friday 19 September 2025 00:44:57 +0000 (0:00:00.328) 0:03:14.669 ******
2025-09-19 00:52:39.124490 | orchestrator | changed: [testbed-node-3]
2025-09-19 00:52:39.124495 | orchestrator | changed: [testbed-node-4]
2025-09-19 00:52:39.124501 | orchestrator | changed: [testbed-node-5]
2025-09-19 00:52:39.124506 | orchestrator |
2025-09-19 00:52:39.124511 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-09-19 00:52:39.124516 | orchestrator | Friday 19 September 2025 00:44:58 +0000 (0:00:01.532) 0:03:16.202 ******
2025-09-19 00:52:39.124522 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 00:52:39.124527 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 00:52:39.124532 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 00:52:39.124537 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.124543 | orchestrator |
2025-09-19 00:52:39.124548 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-09-19 00:52:39.124553 | orchestrator | Friday 19 September 2025 00:44:59 +0000 (0:00:00.569) 0:03:16.771 ******
2025-09-19 00:52:39.124559 | orchestrator | ok: [testbed-node-3]
2025-09-19 00:52:39.124564 | orchestrator | ok: [testbed-node-4]
2025-09-19 00:52:39.124569 | orchestrator | ok: [testbed-node-5]
2025-09-19 00:52:39.124574 | orchestrator |
2025-09-19 00:52:39.124580 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2025-09-19 00:52:39.124585 | orchestrator | Friday 19 September 2025 00:44:59 +0000 (0:00:00.382) 0:03:17.154 ******
2025-09-19 00:52:39.124590 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.124595 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.124601 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.124609 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.124614 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:52:39.124619 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:52:39.124625 | orchestrator |
2025-09-19 00:52:39.124630 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-09-19 00:52:39.124635 | orchestrator | Friday 19 September 2025 00:45:00 +0000 (0:00:00.889) 0:03:18.044 ******
2025-09-19 00:52:39.124655 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:52:39.124661 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:52:39.124666 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:52:39.124671 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 00:52:39.124677 | orchestrator |
2025-09-19 00:52:39.124682 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-09-19 00:52:39.124687 | orchestrator | Friday 19 September 2025 00:45:01 +0000 (0:00:00.753) 0:03:18.797 ******
2025-09-19 00:52:39.124693 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.124698 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.124703 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.124708 | orchestrator |
2025-09-19 00:52:39.124714 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-09-19 00:52:39.124719 | orchestrator | Friday 19 September 2025 00:45:01 +0000 (0:00:00.377) 0:03:19.175 ******
2025-09-19 00:52:39.124724 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:52:39.124730 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:52:39.124735 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:52:39.124740 | orchestrator |
2025-09-19 00:52:39.124746 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-09-19 00:52:39.124751 | orchestrator | Friday 19 September 2025 00:45:02 +0000 (0:00:01.229) 0:03:20.404 ******
2025-09-19 00:52:39.124756 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 00:52:39.124761 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 00:52:39.124771 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 00:52:39.124776 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.124782 | orchestrator |
2025-09-19 00:52:39.124787 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-09-19 00:52:39.124792 | orchestrator | Friday 19 September 2025 00:45:03 +0000 (0:00:00.668) 0:03:21.072 ******
2025-09-19 00:52:39.124798 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.124803 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.124808 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.124813 | orchestrator |
2025-09-19 00:52:39.124819 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-09-19 00:52:39.124824 | orchestrator |
2025-09-19 00:52:39.124829 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-19 00:52:39.124835 | orchestrator | Friday 19 September 2025 00:45:03 +0000 (0:00:00.482) 0:03:21.554 ******
2025-09-19 00:52:39.124840 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 00:52:39.124845 | orchestrator |
2025-09-19 00:52:39.124851 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-19 00:52:39.124856 | orchestrator | Friday 19 September 2025 00:45:04 +0000 (0:00:00.686) 0:03:22.240 ******
2025-09-19 00:52:39.124861 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 00:52:39.124879 | orchestrator |
2025-09-19 00:52:39.124884 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-19 00:52:39.124890 | orchestrator | Friday 19 September 2025 00:45:05 +0000 (0:00:00.501) 0:03:22.742 ******
2025-09-19 00:52:39.124895 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.124900 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.124906 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.124911 | orchestrator |
2025-09-19 00:52:39.124917 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-19 00:52:39.124922 | orchestrator | Friday 19 September 2025 00:45:05 +0000 (0:00:00.795) 0:03:23.537 ******
2025-09-19 00:52:39.124927 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.124933 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.124938 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.124943 | orchestrator |
2025-09-19 00:52:39.124949 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-19 00:52:39.124954 | orchestrator | Friday 19 September 2025 00:45:06 +0000 (0:00:00.273) 0:03:23.811 ******
2025-09-19 00:52:39.124959 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.124965 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.124970 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.124975 | orchestrator |
2025-09-19 00:52:39.124981 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-19 00:52:39.124986 | orchestrator | Friday 19 September 2025 00:45:06 +0000 (0:00:00.292) 0:03:24.103 ******
2025-09-19 00:52:39.124991 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.124997 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.125002 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.125008 | orchestrator |
2025-09-19 00:52:39.125013 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-19 00:52:39.125018 | orchestrator | Friday 19 September 2025 00:45:06 +0000 (0:00:00.303) 0:03:24.406 ******
2025-09-19 00:52:39.125024 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.125029 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.125034 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.125039 | orchestrator |
2025-09-19 00:52:39.125045 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-19 00:52:39.125050 | orchestrator | Friday 19 September 2025 00:45:07 +0000 (0:00:00.948) 0:03:25.355 ******
2025-09-19 00:52:39.125055 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.125067 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.125072 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.125077 | orchestrator |
2025-09-19 00:52:39.125083 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-19 00:52:39.125091 | orchestrator | Friday 19 September 2025 00:45:08 +0000 (0:00:00.419) 0:03:25.774 ******
2025-09-19 00:52:39.125097 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.125102 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.125107 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.125112 | orchestrator |
2025-09-19 00:52:39.125118 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-19 00:52:39.125138 | orchestrator | Friday 19 September 2025 00:45:08 +0000 (0:00:00.476) 0:03:26.251 ******
2025-09-19 00:52:39.125144 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.125149 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.125155 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.125160 | orchestrator |
2025-09-19 00:52:39.125165 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-19 00:52:39.125171 | orchestrator | Friday 19 September 2025 00:45:09 +0000 (0:00:00.779) 0:03:27.030 ******
2025-09-19 00:52:39.125176 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.125181 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.125186 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.125192 | orchestrator |
2025-09-19 00:52:39.125197 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-19 00:52:39.125202 | orchestrator | Friday 19 September 2025 00:45:10 +0000 (0:00:01.314) 0:03:28.345 ******
2025-09-19 00:52:39.125208 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.125213 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.125218 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.125224 | orchestrator |
2025-09-19 00:52:39.125229 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-19 00:52:39.125234 | orchestrator | Friday 19 September 2025 00:45:11 +0000 (0:00:00.362) 0:03:28.708 ******
2025-09-19 00:52:39.125239 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.125245 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.125250 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.125255 | orchestrator |
2025-09-19 00:52:39.125261 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-19 00:52:39.125266 | orchestrator | Friday 19 September 2025 00:45:12 +0000 (0:00:01.289) 0:03:29.997 ******
2025-09-19 00:52:39.125271 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.125276 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.125282 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.125287 | orchestrator |
2025-09-19 00:52:39.125292 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-19 00:52:39.125298 | orchestrator | Friday 19 September 2025 00:45:12 +0000 (0:00:00.367) 0:03:30.364 ******
2025-09-19 00:52:39.125303 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.125308 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.125313 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.125319 | orchestrator |
2025-09-19 00:52:39.125324 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-19 00:52:39.125329 | orchestrator | Friday 19 September 2025 00:45:13 +0000 (0:00:00.578) 0:03:30.943 ******
2025-09-19 00:52:39.125335 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.125340 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.125345 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.125351 | orchestrator |
2025-09-19 00:52:39.125356 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-19 00:52:39.125361 | orchestrator | Friday 19 September 2025 00:45:13 +0000 (0:00:00.289) 0:03:31.233 ******
2025-09-19 00:52:39.125366 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.125372 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.125383 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.125388 | orchestrator |
2025-09-19 00:52:39.125393 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-19 00:52:39.125399 | orchestrator | Friday 19 September 2025 00:45:14 +0000 (0:00:00.461) 0:03:31.694 ******
2025-09-19 00:52:39.125404 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.125409 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.125415 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.125420 | orchestrator |
2025-09-19 00:52:39.125425 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-19 00:52:39.125431 | orchestrator | Friday 19 September 2025 00:45:14 +0000 (0:00:00.275) 0:03:31.970 ******
2025-09-19 00:52:39.125436 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.125441 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.125446 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.125452 | orchestrator |
2025-09-19 00:52:39.125457 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-19 00:52:39.125462 | orchestrator | Friday 19 September 2025 00:45:14 +0000 (0:00:00.407) 0:03:32.378 ******
2025-09-19 00:52:39.125468 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.125473 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.125478 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.125484 | orchestrator |
2025-09-19 00:52:39.125489 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-19 00:52:39.125494 | orchestrator | Friday 19 September 2025 00:45:15 +0000 (0:00:00.511) 0:03:32.889 ******
2025-09-19 00:52:39.125500 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.125505 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.125510 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.125516 | orchestrator |
2025-09-19 00:52:39.125521 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2025-09-19 00:52:39.125526 | orchestrator | Friday 19 September 2025 00:45:15 +0000 (0:00:00.510) 0:03:33.400 ******
2025-09-19 00:52:39.125531 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.125537 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.125542 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.125547 | orchestrator |
2025-09-19 00:52:39.125553 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2025-09-19 00:52:39.125558 | orchestrator | Friday 19 September 2025 00:45:16 +0000 (0:00:00.359) 0:03:33.759 ******
2025-09-19 00:52:39.125563 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 00:52:39.125568 | orchestrator |
2025-09-19 00:52:39.125574 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2025-09-19 00:52:39.125579 | orchestrator | Friday 19 September 2025 00:45:16 +0000 (0:00:00.870) 0:03:34.629 ******
2025-09-19 00:52:39.125588 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.125593 | orchestrator |
2025-09-19 00:52:39.125598 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2025-09-19 00:52:39.125604 | orchestrator | Friday 19 September 2025 00:45:17 +0000 (0:00:00.123) 0:03:34.753 ******
2025-09-19 00:52:39.125609 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-09-19 00:52:39.125614 | orchestrator |
2025-09-19 00:52:39.125634 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2025-09-19 00:52:39.125640 | orchestrator | Friday 19 September 2025 00:45:17 +0000 (0:00:00.862) 0:03:35.616 ******
2025-09-19 00:52:39.125645 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.125651 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.125656 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.125661 | orchestrator |
2025-09-19 00:52:39.125667 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2025-09-19 00:52:39.125672 | orchestrator | Friday 19 September 2025 00:45:18 +0000 (0:00:00.449) 0:03:36.065 ******
2025-09-19 00:52:39.125677 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.125683 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.125692 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.125697 | orchestrator |
2025-09-19 00:52:39.125702 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2025-09-19 00:52:39.125708 | orchestrator | Friday 19 September 2025 00:45:18 +0000 (0:00:00.525) 0:03:36.590 ******
2025-09-19 00:52:39.125713 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:52:39.125718 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:52:39.125724 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:52:39.125729 | orchestrator |
2025-09-19 00:52:39.125734 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2025-09-19 00:52:39.125739 | orchestrator | Friday 19 September 2025 00:45:20 +0000 (0:00:01.289) 0:03:37.880 ******
2025-09-19 00:52:39.125745 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:52:39.125750 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:52:39.125755 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:52:39.125760 | orchestrator |
2025-09-19 00:52:39.125766 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2025-09-19 00:52:39.125771 | orchestrator | Friday 19 September 2025 00:45:21 +0000 (0:00:00.718) 0:03:38.677 ******
2025-09-19 00:52:39.125776 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:52:39.125781 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:52:39.125787 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:52:39.125792 | orchestrator |
2025-09-19 00:52:39.125797 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2025-09-19 00:52:39.125802 | orchestrator | Friday 19 September 2025 00:45:21 +0000 (0:00:00.718) 0:03:39.396 ******
2025-09-19 00:52:39.125808 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.125813 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.125818 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.125823 | orchestrator |
2025-09-19 00:52:39.125829 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2025-09-19 00:52:39.125834 | orchestrator | Friday 19 September 2025 00:45:22 +0000 (0:00:01.036) 0:03:40.432 ******
2025-09-19 00:52:39.125839 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:52:39.125844 | orchestrator |
2025-09-19 00:52:39.125850 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2025-09-19 00:52:39.125855 | orchestrator | Friday 19 September 2025 00:45:24 +0000 (0:00:01.280) 0:03:41.713 ******
2025-09-19 00:52:39.125860 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.125877 | orchestrator |
2025-09-19 00:52:39.125883 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2025-09-19 00:52:39.125888 | orchestrator | Friday 19 September 2025 00:45:24 +0000 (0:00:00.675) 0:03:42.389 ******
2025-09-19 00:52:39.125893 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-19 00:52:39.125899 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 00:52:39.125904 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 00:52:39.125909 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-19 00:52:39.125915 | orchestrator | ok: [testbed-node-1] => (item=None)
2025-09-19 00:52:39.125920 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-19 00:52:39.125925 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-19 00:52:39.125931 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2025-09-19 00:52:39.125936 | orchestrator | ok: [testbed-node-2] => (item=None)
2025-09-19 00:52:39.125941 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2025-09-19 00:52:39.125947 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-19 00:52:39.125952 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2025-09-19 00:52:39.125957 | orchestrator |
2025-09-19 00:52:39.125963 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2025-09-19 00:52:39.125968 | orchestrator | Friday 19 September 2025 00:45:28 +0000 (0:00:03.608) 0:03:45.997 ******
2025-09-19 00:52:39.125977 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:52:39.125982 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:52:39.125987 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:52:39.125993 | orchestrator |
2025-09-19 00:52:39.125998 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2025-09-19 00:52:39.126003 | orchestrator | Friday 19 September 2025 00:45:29 +0000 (0:00:01.413) 0:03:47.410 ******
2025-09-19 00:52:39.126009 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.126029 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.126036 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.126042 | orchestrator |
2025-09-19 00:52:39.126047 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2025-09-19 00:52:39.126052 | orchestrator | Friday 19 September 2025 00:45:30 +0000 (0:00:00.427) 0:03:47.838 ******
2025-09-19 00:52:39.126057 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:52:39.126063 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:52:39.126068 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:52:39.126073 | orchestrator |
2025-09-19 00:52:39.126078 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2025-09-19 00:52:39.126084 | orchestrator | Friday 19 September 2025 00:45:30 +0000 (0:00:00.268) 0:03:48.107 ******
2025-09-19 00:52:39.126089 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:52:39.126094 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:52:39.126100 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:52:39.126105 | orchestrator |
2025-09-19 00:52:39.126110 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2025-09-19 00:52:39.126131 | orchestrator | Friday 19 September 2025 00:45:32 +0000 (0:00:01.610) 0:03:49.717 ******
2025-09-19 00:52:39.126138 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:52:39.126143 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:52:39.126148 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:52:39.126154 | orchestrator |
2025-09-19 00:52:39.126159 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2025-09-19 00:52:39.126164 | orchestrator | Friday 19 September 2025 00:45:33 +0000 (0:00:01.340) 0:03:51.058 ******
2025-09-19 00:52:39.126170 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.126175 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.126180 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.126185 | orchestrator |
2025-09-19 00:52:39.126191 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2025-09-19 00:52:39.126196 | orchestrator | Friday 19 September 2025 00:45:33 +0000 (0:00:00.286) 0:03:51.344 ******
2025-09-19 00:52:39.126201 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 00:52:39.126207 | orchestrator |
2025-09-19 00:52:39.126212 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2025-09-19 00:52:39.126217 | orchestrator | Friday 19 September 2025 00:45:34 +0000 (0:00:00.643) 0:03:51.987 ******
2025-09-19 00:52:39.126223 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.126228 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.126233 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.126238 | orchestrator |
2025-09-19 00:52:39.126244 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2025-09-19 00:52:39.126249 | orchestrator | Friday 19 September 2025 00:45:34 +0000 (0:00:00.260) 0:03:52.247 ******
2025-09-19 00:52:39.126254 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:52:39.126260 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:52:39.126265 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:52:39.126270 | orchestrator |
2025-09-19 00:52:39.126276 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2025-09-19 00:52:39.126281 | orchestrator | Friday 19 September 2025 00:45:34 +0000 (0:00:00.263) 0:03:52.511 ******
2025-09-19 00:52:39.126286 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 00:52:39.126297 | orchestrator |
2025-09-19 00:52:39.126303 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2025-09-19 00:52:39.126308 | orchestrator | Friday 19 September 2025 00:45:35 +0000 (0:00:00.647) 0:03:53.158 ******
2025-09-19 00:52:39.126313 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:52:39.126319 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:52:39.126324 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:52:39.126329 | orchestrator |
2025-09-19 00:52:39.126334 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2025-09-19 00:52:39.126340 | orchestrator | Friday 19 September 2025 00:45:37 +0000 (0:00:01.514)
0:03:54.672 ****** 2025-09-19 00:52:39.126345 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:52:39.126350 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:52:39.126355 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:52:39.126361 | orchestrator | 2025-09-19 00:52:39.126366 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-09-19 00:52:39.126371 | orchestrator | Friday 19 September 2025 00:45:38 +0000 (0:00:01.196) 0:03:55.869 ****** 2025-09-19 00:52:39.126377 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:52:39.126382 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:52:39.126387 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:52:39.126392 | orchestrator | 2025-09-19 00:52:39.126398 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-09-19 00:52:39.126403 | orchestrator | Friday 19 September 2025 00:45:40 +0000 (0:00:01.971) 0:03:57.841 ****** 2025-09-19 00:52:39.126408 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:52:39.126414 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:52:39.126419 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:52:39.126424 | orchestrator | 2025-09-19 00:52:39.126429 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-09-19 00:52:39.126435 | orchestrator | Friday 19 September 2025 00:45:42 +0000 (0:00:02.065) 0:03:59.907 ****** 2025-09-19 00:52:39.126440 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:52:39.126445 | orchestrator | 2025-09-19 00:52:39.126451 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2025-09-19 00:52:39.126475 | orchestrator | Friday 19 September 2025 00:45:42 +0000 (0:00:00.558) 0:04:00.466 ****** 2025-09-19 00:52:39.126481 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2025-09-19 00:52:39.126486 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.126492 | orchestrator | 2025-09-19 00:52:39.126497 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-09-19 00:52:39.126503 | orchestrator | Friday 19 September 2025 00:46:05 +0000 (0:00:22.219) 0:04:22.685 ****** 2025-09-19 00:52:39.126508 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.126513 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.126519 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.126524 | orchestrator | 2025-09-19 00:52:39.126529 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-09-19 00:52:39.126534 | orchestrator | Friday 19 September 2025 00:46:15 +0000 (0:00:10.513) 0:04:33.199 ****** 2025-09-19 00:52:39.126540 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.126545 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.126550 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.126556 | orchestrator | 2025-09-19 00:52:39.126564 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-09-19 00:52:39.126569 | orchestrator | Friday 19 September 2025 00:46:15 +0000 (0:00:00.317) 0:04:33.516 ****** 2025-09-19 00:52:39.126591 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e5520fa35ff32c0522e6bb3c801163652ee775b0'}}, {'key': 'public_network', 
'value': '192.168.16.0/20'}]) 2025-09-19 00:52:39.126603 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e5520fa35ff32c0522e6bb3c801163652ee775b0'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-09-19 00:52:39.126610 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e5520fa35ff32c0522e6bb3c801163652ee775b0'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-09-19 00:52:39.126616 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e5520fa35ff32c0522e6bb3c801163652ee775b0'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-09-19 00:52:39.126622 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e5520fa35ff32c0522e6bb3c801163652ee775b0'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-09-19 00:52:39.126628 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': 
'__omit_place_holder__e5520fa35ff32c0522e6bb3c801163652ee775b0'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__e5520fa35ff32c0522e6bb3c801163652ee775b0'}])  2025-09-19 00:52:39.126634 | orchestrator | 2025-09-19 00:52:39.126639 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-19 00:52:39.126645 | orchestrator | Friday 19 September 2025 00:46:31 +0000 (0:00:15.161) 0:04:48.678 ****** 2025-09-19 00:52:39.126650 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.126655 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.126661 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.126666 | orchestrator | 2025-09-19 00:52:39.126672 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-19 00:52:39.126677 | orchestrator | Friday 19 September 2025 00:46:31 +0000 (0:00:00.365) 0:04:49.043 ****** 2025-09-19 00:52:39.126682 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:52:39.126688 | orchestrator | 2025-09-19 00:52:39.126693 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-19 00:52:39.126698 | orchestrator | Friday 19 September 2025 00:46:31 +0000 (0:00:00.572) 0:04:49.616 ****** 2025-09-19 00:52:39.126704 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.126709 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.126714 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.126719 | orchestrator | 2025-09-19 00:52:39.126725 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-19 00:52:39.126730 | orchestrator | Friday 19 September 2025 00:46:32 +0000 (0:00:00.585) 0:04:50.201 ****** 2025-09-19 00:52:39.126735 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.126741 | 
orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.126746 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.126751 | orchestrator | 2025-09-19 00:52:39.126756 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-19 00:52:39.126767 | orchestrator | Friday 19 September 2025 00:46:32 +0000 (0:00:00.409) 0:04:50.610 ****** 2025-09-19 00:52:39.126772 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-19 00:52:39.126778 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-19 00:52:39.126783 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-19 00:52:39.126788 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.126793 | orchestrator | 2025-09-19 00:52:39.126799 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-19 00:52:39.126807 | orchestrator | Friday 19 September 2025 00:46:33 +0000 (0:00:00.706) 0:04:51.316 ****** 2025-09-19 00:52:39.126813 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.126818 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.126823 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.126829 | orchestrator | 2025-09-19 00:52:39.126834 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-09-19 00:52:39.126839 | orchestrator | 2025-09-19 00:52:39.126845 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-19 00:52:39.126892 | orchestrator | Friday 19 September 2025 00:46:34 +0000 (0:00:00.660) 0:04:51.977 ****** 2025-09-19 00:52:39.126900 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:52:39.126905 | orchestrator | 2025-09-19 00:52:39.126910 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-09-19 00:52:39.126916 | orchestrator | Friday 19 September 2025 00:46:35 +0000 (0:00:00.779) 0:04:52.756 ****** 2025-09-19 00:52:39.126921 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:52:39.126926 | orchestrator | 2025-09-19 00:52:39.126932 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-19 00:52:39.126937 | orchestrator | Friday 19 September 2025 00:46:35 +0000 (0:00:00.545) 0:04:53.302 ****** 2025-09-19 00:52:39.126942 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.126948 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.126953 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.126958 | orchestrator | 2025-09-19 00:52:39.126963 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-19 00:52:39.126969 | orchestrator | Friday 19 September 2025 00:46:36 +0000 (0:00:00.994) 0:04:54.296 ****** 2025-09-19 00:52:39.126974 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.126979 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.126985 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.126990 | orchestrator | 2025-09-19 00:52:39.126995 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-19 00:52:39.127001 | orchestrator | Friday 19 September 2025 00:46:36 +0000 (0:00:00.305) 0:04:54.602 ****** 2025-09-19 00:52:39.127006 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.127011 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.127016 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.127022 | orchestrator | 2025-09-19 00:52:39.127027 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-19 
00:52:39.127032 | orchestrator | Friday 19 September 2025 00:46:37 +0000 (0:00:00.287) 0:04:54.889 ****** 2025-09-19 00:52:39.127038 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.127043 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.127048 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.127053 | orchestrator | 2025-09-19 00:52:39.127058 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-19 00:52:39.127063 | orchestrator | Friday 19 September 2025 00:46:37 +0000 (0:00:00.305) 0:04:55.195 ****** 2025-09-19 00:52:39.127068 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.127072 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.127077 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.127082 | orchestrator | 2025-09-19 00:52:39.127091 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-19 00:52:39.127095 | orchestrator | Friday 19 September 2025 00:46:38 +0000 (0:00:01.059) 0:04:56.254 ****** 2025-09-19 00:52:39.127100 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.127105 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.127110 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.127114 | orchestrator | 2025-09-19 00:52:39.127119 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-19 00:52:39.127124 | orchestrator | Friday 19 September 2025 00:46:38 +0000 (0:00:00.362) 0:04:56.617 ****** 2025-09-19 00:52:39.127128 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.127133 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.127138 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.127142 | orchestrator | 2025-09-19 00:52:39.127147 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-19 00:52:39.127152 | 
orchestrator | Friday 19 September 2025 00:46:39 +0000 (0:00:00.333) 0:04:56.951 ****** 2025-09-19 00:52:39.127157 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.127162 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.127166 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.127171 | orchestrator | 2025-09-19 00:52:39.127176 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-19 00:52:39.127181 | orchestrator | Friday 19 September 2025 00:46:40 +0000 (0:00:00.733) 0:04:57.684 ****** 2025-09-19 00:52:39.127185 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.127190 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.127195 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.127199 | orchestrator | 2025-09-19 00:52:39.127204 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-19 00:52:39.127209 | orchestrator | Friday 19 September 2025 00:46:41 +0000 (0:00:01.114) 0:04:58.799 ****** 2025-09-19 00:52:39.127214 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.127218 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.127223 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.127228 | orchestrator | 2025-09-19 00:52:39.127233 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-19 00:52:39.127237 | orchestrator | Friday 19 September 2025 00:46:41 +0000 (0:00:00.359) 0:04:59.159 ****** 2025-09-19 00:52:39.127242 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.127247 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.127251 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.127256 | orchestrator | 2025-09-19 00:52:39.127261 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-19 00:52:39.127266 | orchestrator | Friday 19 September 2025 00:46:41 +0000 
(0:00:00.374) 0:04:59.533 ****** 2025-09-19 00:52:39.127270 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.127275 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.127280 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.127284 | orchestrator | 2025-09-19 00:52:39.127292 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-19 00:52:39.127297 | orchestrator | Friday 19 September 2025 00:46:42 +0000 (0:00:00.313) 0:04:59.847 ****** 2025-09-19 00:52:39.127302 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.127306 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.127311 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.127316 | orchestrator | 2025-09-19 00:52:39.127321 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-19 00:52:39.127339 | orchestrator | Friday 19 September 2025 00:46:42 +0000 (0:00:00.531) 0:05:00.378 ****** 2025-09-19 00:52:39.127345 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.127349 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.127354 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.127359 | orchestrator | 2025-09-19 00:52:39.127363 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-19 00:52:39.127372 | orchestrator | Friday 19 September 2025 00:46:43 +0000 (0:00:00.345) 0:05:00.724 ****** 2025-09-19 00:52:39.127376 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.127381 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.127386 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.127390 | orchestrator | 2025-09-19 00:52:39.127395 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-19 00:52:39.127400 | orchestrator | Friday 19 September 2025 00:46:43 +0000 
(0:00:00.302) 0:05:01.027 ****** 2025-09-19 00:52:39.127404 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.127409 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.127414 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.127419 | orchestrator | 2025-09-19 00:52:39.127423 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-19 00:52:39.127428 | orchestrator | Friday 19 September 2025 00:46:43 +0000 (0:00:00.283) 0:05:01.310 ****** 2025-09-19 00:52:39.127433 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.127437 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.127442 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.127447 | orchestrator | 2025-09-19 00:52:39.127451 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-19 00:52:39.127456 | orchestrator | Friday 19 September 2025 00:46:44 +0000 (0:00:00.569) 0:05:01.880 ****** 2025-09-19 00:52:39.127461 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.127465 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.127470 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.127475 | orchestrator | 2025-09-19 00:52:39.127479 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-19 00:52:39.127484 | orchestrator | Friday 19 September 2025 00:46:44 +0000 (0:00:00.324) 0:05:02.204 ****** 2025-09-19 00:52:39.127489 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.127493 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.127498 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.127503 | orchestrator | 2025-09-19 00:52:39.127507 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-09-19 00:52:39.127512 | orchestrator | Friday 19 September 2025 00:46:45 +0000 (0:00:00.574) 0:05:02.779 ****** 2025-09-19 
00:52:39.127517 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-19 00:52:39.127521 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 00:52:39.127526 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 00:52:39.127531 | orchestrator | 2025-09-19 00:52:39.127535 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-09-19 00:52:39.127540 | orchestrator | Friday 19 September 2025 00:46:46 +0000 (0:00:00.878) 0:05:03.657 ****** 2025-09-19 00:52:39.127545 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:52:39.127550 | orchestrator | 2025-09-19 00:52:39.127554 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-09-19 00:52:39.127559 | orchestrator | Friday 19 September 2025 00:46:46 +0000 (0:00:00.849) 0:05:04.507 ****** 2025-09-19 00:52:39.127564 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:52:39.127568 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:52:39.127573 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:52:39.127578 | orchestrator | 2025-09-19 00:52:39.127582 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-09-19 00:52:39.127587 | orchestrator | Friday 19 September 2025 00:46:47 +0000 (0:00:00.669) 0:05:05.177 ****** 2025-09-19 00:52:39.127592 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.127596 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.127601 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.127606 | orchestrator | 2025-09-19 00:52:39.127610 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-09-19 00:52:39.127621 | orchestrator | Friday 19 September 2025 00:46:47 
+0000 (0:00:00.317) 0:05:05.494 ****** 2025-09-19 00:52:39.127625 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-19 00:52:39.127630 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-19 00:52:39.127635 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-19 00:52:39.127639 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-09-19 00:52:39.127644 | orchestrator | 2025-09-19 00:52:39.127649 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-09-19 00:52:39.127653 | orchestrator | Friday 19 September 2025 00:46:59 +0000 (0:00:11.244) 0:05:16.738 ****** 2025-09-19 00:52:39.127658 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.127663 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.127667 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.127672 | orchestrator | 2025-09-19 00:52:39.127677 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-09-19 00:52:39.127681 | orchestrator | Friday 19 September 2025 00:46:59 +0000 (0:00:00.617) 0:05:17.356 ****** 2025-09-19 00:52:39.127686 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-19 00:52:39.127691 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-19 00:52:39.127696 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-19 00:52:39.127700 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-19 00:52:39.127708 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:52:39.127713 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:52:39.127717 | orchestrator | 2025-09-19 00:52:39.127722 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-09-19 00:52:39.127727 | orchestrator | Friday 19 September 2025 00:47:01 +0000 (0:00:02.213) 
0:05:19.570 ****** 2025-09-19 00:52:39.127744 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-19 00:52:39.127750 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-19 00:52:39.127754 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-19 00:52:39.127759 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-19 00:52:39.127764 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-09-19 00:52:39.127768 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-19 00:52:39.127773 | orchestrator | 2025-09-19 00:52:39.127778 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-09-19 00:52:39.127782 | orchestrator | Friday 19 September 2025 00:47:03 +0000 (0:00:01.212) 0:05:20.782 ****** 2025-09-19 00:52:39.127787 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.127792 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.127796 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.127801 | orchestrator | 2025-09-19 00:52:39.127806 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-09-19 00:52:39.127810 | orchestrator | Friday 19 September 2025 00:47:03 +0000 (0:00:00.715) 0:05:21.497 ****** 2025-09-19 00:52:39.127815 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.127820 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.127824 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.127829 | orchestrator | 2025-09-19 00:52:39.127834 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-09-19 00:52:39.127839 | orchestrator | Friday 19 September 2025 00:47:04 +0000 (0:00:00.571) 0:05:22.068 ****** 2025-09-19 00:52:39.127843 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.127848 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.127853 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 00:52:39.127857 | orchestrator | 2025-09-19 00:52:39.127862 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-09-19 00:52:39.127875 | orchestrator | Friday 19 September 2025 00:47:04 +0000 (0:00:00.376) 0:05:22.445 ****** 2025-09-19 00:52:39.127880 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:52:39.127891 | orchestrator | 2025-09-19 00:52:39.127896 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-09-19 00:52:39.127900 | orchestrator | Friday 19 September 2025 00:47:05 +0000 (0:00:00.528) 0:05:22.973 ****** 2025-09-19 00:52:39.127905 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.127910 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.127914 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.127919 | orchestrator | 2025-09-19 00:52:39.127924 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-09-19 00:52:39.127928 | orchestrator | Friday 19 September 2025 00:47:05 +0000 (0:00:00.538) 0:05:23.512 ****** 2025-09-19 00:52:39.127933 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.127938 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.127942 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.127947 | orchestrator | 2025-09-19 00:52:39.127952 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-09-19 00:52:39.127956 | orchestrator | Friday 19 September 2025 00:47:06 +0000 (0:00:00.354) 0:05:23.867 ****** 2025-09-19 00:52:39.127961 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:52:39.127966 | orchestrator | 2025-09-19 00:52:39.127971 | orchestrator | TASK [ceph-mgr : Generate 
systemd unit file] *********************************** 2025-09-19 00:52:39.127975 | orchestrator | Friday 19 September 2025 00:47:06 +0000 (0:00:00.504) 0:05:24.372 ****** 2025-09-19 00:52:39.127980 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:52:39.127985 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:52:39.127989 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:52:39.127994 | orchestrator | 2025-09-19 00:52:39.127999 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-09-19 00:52:39.128004 | orchestrator | Friday 19 September 2025 00:47:08 +0000 (0:00:01.724) 0:05:26.097 ****** 2025-09-19 00:52:39.128009 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:52:39.128013 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:52:39.128018 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:52:39.128023 | orchestrator | 2025-09-19 00:52:39.128028 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-09-19 00:52:39.128032 | orchestrator | Friday 19 September 2025 00:47:09 +0000 (0:00:01.306) 0:05:27.403 ****** 2025-09-19 00:52:39.128037 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:52:39.128042 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:52:39.128046 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:52:39.128051 | orchestrator | 2025-09-19 00:52:39.128056 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-09-19 00:52:39.128061 | orchestrator | Friday 19 September 2025 00:47:11 +0000 (0:00:01.787) 0:05:29.191 ****** 2025-09-19 00:52:39.128065 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:52:39.128070 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:52:39.128075 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:52:39.128080 | orchestrator | 2025-09-19 00:52:39.128084 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] 
************************************** 2025-09-19 00:52:39.128089 | orchestrator | Friday 19 September 2025 00:47:13 +0000 (0:00:02.035) 0:05:31.226 ****** 2025-09-19 00:52:39.128094 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.128098 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.128103 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-09-19 00:52:39.128108 | orchestrator | 2025-09-19 00:52:39.128113 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-09-19 00:52:39.128120 | orchestrator | Friday 19 September 2025 00:47:14 +0000 (0:00:00.667) 0:05:31.894 ****** 2025-09-19 00:52:39.128125 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-09-19 00:52:39.128130 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-09-19 00:52:39.128152 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-09-19 00:52:39.128158 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-09-19 00:52:39.128163 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2025-09-19 00:52:39.128167 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-09-19 00:52:39.128172 | orchestrator | 2025-09-19 00:52:39.128177 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-09-19 00:52:39.128182 | orchestrator | Friday 19 September 2025 00:47:44 +0000 (0:00:30.370) 0:06:02.264 ****** 2025-09-19 00:52:39.128186 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-09-19 00:52:39.128191 | orchestrator | 2025-09-19 00:52:39.128196 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-09-19 00:52:39.128200 | orchestrator | Friday 19 September 2025 00:47:46 +0000 (0:00:01.434) 0:06:03.698 ****** 2025-09-19 00:52:39.128205 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.128210 | orchestrator | 2025-09-19 00:52:39.128214 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-09-19 00:52:39.128219 | orchestrator | Friday 19 September 2025 00:47:46 +0000 (0:00:00.308) 0:06:04.007 ****** 2025-09-19 00:52:39.128224 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.128228 | orchestrator | 2025-09-19 00:52:39.128233 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-09-19 00:52:39.128238 | orchestrator | Friday 19 September 2025 00:47:46 +0000 (0:00:00.176) 0:06:04.183 ****** 2025-09-19 00:52:39.128242 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-09-19 00:52:39.128247 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-09-19 00:52:39.128252 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-09-19 00:52:39.128256 | orchestrator | 2025-09-19 00:52:39.128261 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2025-09-19 00:52:39.128266 | orchestrator | Friday 19 September 2025 00:47:53 +0000 (0:00:06.510) 0:06:10.693 ****** 2025-09-19 00:52:39.128270 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-09-19 00:52:39.128275 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-09-19 00:52:39.128280 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-09-19 00:52:39.128284 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-09-19 00:52:39.128289 | orchestrator | 2025-09-19 00:52:39.128294 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-19 00:52:39.128298 | orchestrator | Friday 19 September 2025 00:47:57 +0000 (0:00:04.889) 0:06:15.583 ****** 2025-09-19 00:52:39.128303 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:52:39.128308 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:52:39.128312 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:52:39.128317 | orchestrator | 2025-09-19 00:52:39.128322 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-19 00:52:39.128326 | orchestrator | Friday 19 September 2025 00:47:58 +0000 (0:00:00.668) 0:06:16.251 ****** 2025-09-19 00:52:39.128331 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:52:39.128336 | orchestrator | 2025-09-19 00:52:39.128340 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-19 00:52:39.128345 | orchestrator | Friday 19 September 2025 00:47:59 +0000 (0:00:00.547) 0:06:16.798 ****** 2025-09-19 00:52:39.128350 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.128354 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.128359 | orchestrator | ok: 
[testbed-node-2] 2025-09-19 00:52:39.128367 | orchestrator | 2025-09-19 00:52:39.128372 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-19 00:52:39.128377 | orchestrator | Friday 19 September 2025 00:47:59 +0000 (0:00:00.616) 0:06:17.415 ****** 2025-09-19 00:52:39.128381 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:52:39.128386 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:52:39.128391 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:52:39.128395 | orchestrator | 2025-09-19 00:52:39.128400 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-19 00:52:39.128405 | orchestrator | Friday 19 September 2025 00:48:00 +0000 (0:00:01.143) 0:06:18.558 ****** 2025-09-19 00:52:39.128409 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-19 00:52:39.128414 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-19 00:52:39.128419 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-19 00:52:39.128423 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.128428 | orchestrator | 2025-09-19 00:52:39.128433 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-19 00:52:39.128438 | orchestrator | Friday 19 September 2025 00:48:01 +0000 (0:00:00.603) 0:06:19.162 ****** 2025-09-19 00:52:39.128442 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.128447 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.128452 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.128456 | orchestrator | 2025-09-19 00:52:39.128461 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-09-19 00:52:39.128466 | orchestrator | 2025-09-19 00:52:39.128470 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-19 
00:52:39.128478 | orchestrator | Friday 19 September 2025 00:48:02 +0000 (0:00:00.785) 0:06:19.948 ****** 2025-09-19 00:52:39.128483 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.128487 | orchestrator | 2025-09-19 00:52:39.128492 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-19 00:52:39.128509 | orchestrator | Friday 19 September 2025 00:48:02 +0000 (0:00:00.535) 0:06:20.483 ****** 2025-09-19 00:52:39.128515 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.128520 | orchestrator | 2025-09-19 00:52:39.128524 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-19 00:52:39.128529 | orchestrator | Friday 19 September 2025 00:48:03 +0000 (0:00:00.749) 0:06:21.233 ****** 2025-09-19 00:52:39.128534 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.128538 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.128543 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.128548 | orchestrator | 2025-09-19 00:52:39.128552 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-19 00:52:39.128557 | orchestrator | Friday 19 September 2025 00:48:03 +0000 (0:00:00.396) 0:06:21.629 ****** 2025-09-19 00:52:39.128562 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.128566 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.128571 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.128576 | orchestrator | 2025-09-19 00:52:39.128580 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-19 00:52:39.128585 | orchestrator | Friday 19 September 2025 00:48:04 +0000 (0:00:00.687) 0:06:22.316 ****** 
2025-09-19 00:52:39.128590 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.128594 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.128599 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.128604 | orchestrator | 2025-09-19 00:52:39.128608 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-19 00:52:39.128613 | orchestrator | Friday 19 September 2025 00:48:05 +0000 (0:00:00.715) 0:06:23.032 ****** 2025-09-19 00:52:39.128618 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.128622 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.128632 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.128636 | orchestrator | 2025-09-19 00:52:39.128641 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-19 00:52:39.128646 | orchestrator | Friday 19 September 2025 00:48:06 +0000 (0:00:00.709) 0:06:23.743 ****** 2025-09-19 00:52:39.128650 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.128655 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.128660 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.128664 | orchestrator | 2025-09-19 00:52:39.128669 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-19 00:52:39.128674 | orchestrator | Friday 19 September 2025 00:48:06 +0000 (0:00:00.582) 0:06:24.325 ****** 2025-09-19 00:52:39.128679 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.128683 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.128688 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.128693 | orchestrator | 2025-09-19 00:52:39.128697 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-19 00:52:39.128702 | orchestrator | Friday 19 September 2025 00:48:06 +0000 (0:00:00.302) 0:06:24.627 ****** 2025-09-19 00:52:39.128707 | 
orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.128711 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.128716 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.128721 | orchestrator | 2025-09-19 00:52:39.128725 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-19 00:52:39.128730 | orchestrator | Friday 19 September 2025 00:48:07 +0000 (0:00:00.281) 0:06:24.909 ****** 2025-09-19 00:52:39.128735 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.128739 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.128744 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.128749 | orchestrator | 2025-09-19 00:52:39.128753 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-19 00:52:39.128758 | orchestrator | Friday 19 September 2025 00:48:07 +0000 (0:00:00.727) 0:06:25.636 ****** 2025-09-19 00:52:39.128763 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.128767 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.128772 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.128777 | orchestrator | 2025-09-19 00:52:39.128781 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-19 00:52:39.128786 | orchestrator | Friday 19 September 2025 00:48:09 +0000 (0:00:01.034) 0:06:26.671 ****** 2025-09-19 00:52:39.128791 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.128795 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.128800 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.128805 | orchestrator | 2025-09-19 00:52:39.128809 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-19 00:52:39.128814 | orchestrator | Friday 19 September 2025 00:48:09 +0000 (0:00:00.309) 0:06:26.981 ****** 2025-09-19 00:52:39.128819 | orchestrator | skipping: 
[testbed-node-3] 2025-09-19 00:52:39.128823 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.128828 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.128833 | orchestrator | 2025-09-19 00:52:39.128837 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-19 00:52:39.128842 | orchestrator | Friday 19 September 2025 00:48:09 +0000 (0:00:00.298) 0:06:27.279 ****** 2025-09-19 00:52:39.128847 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.128851 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.128856 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.128861 | orchestrator | 2025-09-19 00:52:39.128876 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-19 00:52:39.128881 | orchestrator | Friday 19 September 2025 00:48:09 +0000 (0:00:00.307) 0:06:27.587 ****** 2025-09-19 00:52:39.128886 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.128891 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.128895 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.128903 | orchestrator | 2025-09-19 00:52:39.128908 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-19 00:52:39.128916 | orchestrator | Friday 19 September 2025 00:48:10 +0000 (0:00:00.639) 0:06:28.226 ****** 2025-09-19 00:52:39.128921 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.128926 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.128930 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.128935 | orchestrator | 2025-09-19 00:52:39.128940 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-19 00:52:39.128945 | orchestrator | Friday 19 September 2025 00:48:10 +0000 (0:00:00.332) 0:06:28.558 ****** 2025-09-19 00:52:39.128951 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.128956 | 
orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.128961 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.128966 | orchestrator | 2025-09-19 00:52:39.128971 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-19 00:52:39.128975 | orchestrator | Friday 19 September 2025 00:48:11 +0000 (0:00:00.323) 0:06:28.881 ****** 2025-09-19 00:52:39.128980 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.128985 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.128990 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.128994 | orchestrator | 2025-09-19 00:52:39.128999 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-19 00:52:39.129004 | orchestrator | Friday 19 September 2025 00:48:11 +0000 (0:00:00.287) 0:06:29.168 ****** 2025-09-19 00:52:39.129008 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.129013 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.129018 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.129022 | orchestrator | 2025-09-19 00:52:39.129027 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-19 00:52:39.129032 | orchestrator | Friday 19 September 2025 00:48:12 +0000 (0:00:00.539) 0:06:29.708 ****** 2025-09-19 00:52:39.129036 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.129041 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.129046 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.129051 | orchestrator | 2025-09-19 00:52:39.129055 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-19 00:52:39.129060 | orchestrator | Friday 19 September 2025 00:48:12 +0000 (0:00:00.329) 0:06:30.038 ****** 2025-09-19 00:52:39.129065 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.129069 | orchestrator | ok: 
[testbed-node-4] 2025-09-19 00:52:39.129074 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.129079 | orchestrator | 2025-09-19 00:52:39.129084 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-09-19 00:52:39.129088 | orchestrator | Friday 19 September 2025 00:48:12 +0000 (0:00:00.498) 0:06:30.536 ****** 2025-09-19 00:52:39.129093 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.129098 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.129102 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.129107 | orchestrator | 2025-09-19 00:52:39.129112 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-09-19 00:52:39.129117 | orchestrator | Friday 19 September 2025 00:48:13 +0000 (0:00:00.589) 0:06:31.126 ****** 2025-09-19 00:52:39.129122 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-19 00:52:39.129126 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 00:52:39.129131 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 00:52:39.129136 | orchestrator | 2025-09-19 00:52:39.129140 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-09-19 00:52:39.129145 | orchestrator | Friday 19 September 2025 00:48:14 +0000 (0:00:00.630) 0:06:31.756 ****** 2025-09-19 00:52:39.129150 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-4, testbed-node-3, testbed-node-5 2025-09-19 00:52:39.129155 | orchestrator | 2025-09-19 00:52:39.129163 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-09-19 00:52:39.129168 | orchestrator | Friday 19 September 2025 00:48:14 +0000 (0:00:00.551) 0:06:32.308 ****** 2025-09-19 00:52:39.129172 | orchestrator | skipping: 
[testbed-node-3] 2025-09-19 00:52:39.129177 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.129182 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.129186 | orchestrator | 2025-09-19 00:52:39.129191 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-09-19 00:52:39.129196 | orchestrator | Friday 19 September 2025 00:48:15 +0000 (0:00:00.571) 0:06:32.879 ****** 2025-09-19 00:52:39.129200 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.129205 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.129210 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.129215 | orchestrator | 2025-09-19 00:52:39.129219 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-09-19 00:52:39.129224 | orchestrator | Friday 19 September 2025 00:48:15 +0000 (0:00:00.345) 0:06:33.225 ****** 2025-09-19 00:52:39.129229 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.129234 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.129238 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.129243 | orchestrator | 2025-09-19 00:52:39.129248 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-09-19 00:52:39.129253 | orchestrator | Friday 19 September 2025 00:48:16 +0000 (0:00:00.659) 0:06:33.884 ****** 2025-09-19 00:52:39.129257 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.129262 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.129267 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.129271 | orchestrator | 2025-09-19 00:52:39.129276 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-09-19 00:52:39.129281 | orchestrator | Friday 19 September 2025 00:48:16 +0000 (0:00:00.323) 0:06:34.208 ****** 2025-09-19 00:52:39.129286 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-19 00:52:39.129291 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-19 00:52:39.129295 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-19 00:52:39.129300 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-19 00:52:39.129307 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-19 00:52:39.129312 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-19 00:52:39.129317 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-19 00:52:39.129326 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-19 00:52:39.129331 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-19 00:52:39.129336 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-19 00:52:39.129340 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-19 00:52:39.129345 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-19 00:52:39.129350 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-19 00:52:39.129354 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-19 00:52:39.129359 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-19 00:52:39.129364 | orchestrator | 2025-09-19 00:52:39.129369 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2025-09-19 00:52:39.129373 | orchestrator | Friday 19 September 2025 00:48:20 +0000 (0:00:03.601) 0:06:37.809 ****** 2025-09-19 00:52:39.129381 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.129386 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.129391 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.129396 | orchestrator | 2025-09-19 00:52:39.129400 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-09-19 00:52:39.129405 | orchestrator | Friday 19 September 2025 00:48:20 +0000 (0:00:00.306) 0:06:38.116 ****** 2025-09-19 00:52:39.129410 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.129415 | orchestrator | 2025-09-19 00:52:39.129419 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-09-19 00:52:39.129424 | orchestrator | Friday 19 September 2025 00:48:20 +0000 (0:00:00.496) 0:06:38.613 ****** 2025-09-19 00:52:39.129429 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-19 00:52:39.129433 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-19 00:52:39.129438 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-19 00:52:39.129443 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-09-19 00:52:39.129448 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-09-19 00:52:39.129452 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-09-19 00:52:39.129457 | orchestrator | 2025-09-19 00:52:39.129462 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-09-19 00:52:39.129467 | orchestrator | Friday 19 September 2025 00:48:22 +0000 (0:00:01.241) 0:06:39.854 ****** 2025-09-19 00:52:39.129471 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:52:39.129476 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-19 00:52:39.129481 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 00:52:39.129486 | orchestrator | 2025-09-19 00:52:39.129490 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-09-19 00:52:39.129495 | orchestrator | Friday 19 September 2025 00:48:24 +0000 (0:00:02.293) 0:06:42.147 ****** 2025-09-19 00:52:39.129500 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-19 00:52:39.129505 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-19 00:52:39.129509 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:52:39.129514 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-19 00:52:39.129519 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-19 00:52:39.129523 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:52:39.129528 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-19 00:52:39.129533 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-19 00:52:39.129537 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:52:39.129542 | orchestrator | 2025-09-19 00:52:39.129547 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-09-19 00:52:39.129552 | orchestrator | Friday 19 September 2025 00:48:25 +0000 (0:00:01.129) 0:06:43.276 ****** 2025-09-19 00:52:39.129556 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-19 00:52:39.129561 | orchestrator | 2025-09-19 00:52:39.129566 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-09-19 00:52:39.129570 | orchestrator | Friday 19 September 2025 00:48:27 +0000 (0:00:02.038) 0:06:45.314 ****** 2025-09-19 00:52:39.129575 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.129580 | orchestrator | 2025-09-19 00:52:39.129585 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-09-19 00:52:39.129589 | orchestrator | Friday 19 September 2025 00:48:28 +0000 (0:00:00.561) 0:06:45.875 ****** 2025-09-19 00:52:39.129594 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9c5ae36c-b075-5e22-9b23-69e08de6e546', 'data_vg': 'ceph-9c5ae36c-b075-5e22-9b23-69e08de6e546'}) 2025-09-19 00:52:39.129603 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-bc7aa585-dea2-57c4-a9fa-18818632dc3c', 'data_vg': 'ceph-bc7aa585-dea2-57c4-a9fa-18818632dc3c'}) 2025-09-19 00:52:39.129611 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7c9f8b51-166c-5055-bfcb-65abe80d3110', 'data_vg': 'ceph-7c9f8b51-166c-5055-bfcb-65abe80d3110'}) 2025-09-19 00:52:39.129618 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3271a5cd-b931-506b-9a72-a7bc6b6b65fd', 'data_vg': 'ceph-3271a5cd-b931-506b-9a72-a7bc6b6b65fd'}) 2025-09-19 00:52:39.129623 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ba978b90-a663-5d0c-8f05-4b4e8986f79e', 'data_vg': 'ceph-ba978b90-a663-5d0c-8f05-4b4e8986f79e'}) 2025-09-19 00:52:39.129628 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-25e4de26-ffd2-5ba5-a3e7-287c918a347b', 'data_vg': 'ceph-25e4de26-ffd2-5ba5-a3e7-287c918a347b'}) 2025-09-19 00:52:39.129633 | orchestrator | 2025-09-19 00:52:39.129638 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-09-19 00:52:39.129642 | orchestrator | Friday 19 September 2025 00:49:15 +0000 (0:00:46.987) 0:07:32.863 ****** 2025-09-19 00:52:39.129647 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.129652 | orchestrator | skipping: [testbed-node-4] 2025-09-19 
00:52:39.129656 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.129661 | orchestrator | 2025-09-19 00:52:39.129666 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-09-19 00:52:39.129670 | orchestrator | Friday 19 September 2025 00:49:15 +0000 (0:00:00.327) 0:07:33.190 ****** 2025-09-19 00:52:39.129675 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.129680 | orchestrator | 2025-09-19 00:52:39.129685 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-09-19 00:52:39.129690 | orchestrator | Friday 19 September 2025 00:49:16 +0000 (0:00:00.531) 0:07:33.721 ****** 2025-09-19 00:52:39.129694 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.129699 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.129704 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.129709 | orchestrator | 2025-09-19 00:52:39.129713 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-09-19 00:52:39.129718 | orchestrator | Friday 19 September 2025 00:49:17 +0000 (0:00:00.955) 0:07:34.676 ****** 2025-09-19 00:52:39.129723 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.129728 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.129732 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.129737 | orchestrator | 2025-09-19 00:52:39.129742 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-09-19 00:52:39.129747 | orchestrator | Friday 19 September 2025 00:49:19 +0000 (0:00:02.614) 0:07:37.291 ****** 2025-09-19 00:52:39.129751 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.129756 | orchestrator | 2025-09-19 00:52:39.129761 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2025-09-19 00:52:39.129766 | orchestrator | Friday 19 September 2025 00:49:20 +0000 (0:00:00.501) 0:07:37.792 ****** 2025-09-19 00:52:39.129770 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:52:39.129775 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:52:39.129780 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:52:39.129785 | orchestrator | 2025-09-19 00:52:39.129789 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-09-19 00:52:39.129794 | orchestrator | Friday 19 September 2025 00:49:21 +0000 (0:00:01.474) 0:07:39.267 ****** 2025-09-19 00:52:39.129799 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:52:39.129804 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:52:39.129808 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:52:39.129813 | orchestrator | 2025-09-19 00:52:39.129818 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-09-19 00:52:39.129828 | orchestrator | Friday 19 September 2025 00:49:22 +0000 (0:00:01.215) 0:07:40.483 ****** 2025-09-19 00:52:39.129833 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:52:39.129838 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:52:39.129843 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:52:39.129847 | orchestrator | 2025-09-19 00:52:39.129852 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-09-19 00:52:39.129857 | orchestrator | Friday 19 September 2025 00:49:24 +0000 (0:00:01.775) 0:07:42.258 ****** 2025-09-19 00:52:39.129861 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.129875 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.129880 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.129884 | orchestrator | 2025-09-19 00:52:39.129889 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2025-09-19 00:52:39.129894 | orchestrator | Friday 19 September 2025 00:49:24 +0000 (0:00:00.327) 0:07:42.586 ****** 2025-09-19 00:52:39.129899 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.129903 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.129908 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.129913 | orchestrator | 2025-09-19 00:52:39.129917 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-09-19 00:52:39.129922 | orchestrator | Friday 19 September 2025 00:49:25 +0000 (0:00:00.557) 0:07:43.143 ****** 2025-09-19 00:52:39.129927 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-09-19 00:52:39.129932 | orchestrator | ok: [testbed-node-4] => (item=2) 2025-09-19 00:52:39.129936 | orchestrator | ok: [testbed-node-5] => (item=3) 2025-09-19 00:52:39.129941 | orchestrator | ok: [testbed-node-4] => (item=5) 2025-09-19 00:52:39.129946 | orchestrator | ok: [testbed-node-5] => (item=1) 2025-09-19 00:52:39.129950 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-19 00:52:39.129955 | orchestrator | 2025-09-19 00:52:39.129960 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-09-19 00:52:39.129964 | orchestrator | Friday 19 September 2025 00:49:26 +0000 (0:00:01.100) 0:07:44.244 ****** 2025-09-19 00:52:39.129969 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-09-19 00:52:39.129974 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-09-19 00:52:39.129979 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-09-19 00:52:39.129986 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-09-19 00:52:39.129991 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-19 00:52:39.129995 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-09-19 00:52:39.130000 | orchestrator | 2025-09-19 00:52:39.130005 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-09-19 00:52:39.130012 | orchestrator | Friday 19 September 2025 00:49:28 +0000 (0:00:02.105) 0:07:46.349 ****** 2025-09-19 00:52:39.130031 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-09-19 00:52:39.130036 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-09-19 00:52:39.130040 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-09-19 00:52:39.130045 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-09-19 00:52:39.130050 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-19 00:52:39.130054 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-09-19 00:52:39.130059 | orchestrator | 2025-09-19 00:52:39.130064 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-09-19 00:52:39.130069 | orchestrator | Friday 19 September 2025 00:49:33 +0000 (0:00:04.338) 0:07:50.688 ****** 2025-09-19 00:52:39.130073 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.130078 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.130083 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-19 00:52:39.130088 | orchestrator | 2025-09-19 00:52:39.130092 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-09-19 00:52:39.130097 | orchestrator | Friday 19 September 2025 00:49:35 +0000 (0:00:02.723) 0:07:53.411 ****** 2025-09-19 00:52:39.130102 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.130110 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.130115 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2025-09-19 00:52:39.130120 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-19 00:52:39.130124 | orchestrator | 2025-09-19 00:52:39.130129 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-09-19 00:52:39.130134 | orchestrator | Friday 19 September 2025 00:49:48 +0000 (0:00:12.552) 0:08:05.964 ****** 2025-09-19 00:52:39.130139 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.130143 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.130148 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.130153 | orchestrator | 2025-09-19 00:52:39.130157 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-19 00:52:39.130162 | orchestrator | Friday 19 September 2025 00:49:49 +0000 (0:00:01.049) 0:08:07.014 ****** 2025-09-19 00:52:39.130167 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.130172 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.130176 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.130181 | orchestrator | 2025-09-19 00:52:39.130186 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-19 00:52:39.130191 | orchestrator | Friday 19 September 2025 00:49:49 +0000 (0:00:00.356) 0:08:07.370 ****** 2025-09-19 00:52:39.130195 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.130200 | orchestrator | 2025-09-19 00:52:39.130205 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-19 00:52:39.130209 | orchestrator | Friday 19 September 2025 00:49:50 +0000 (0:00:00.525) 0:08:07.896 ****** 2025-09-19 00:52:39.130214 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 00:52:39.130219 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-09-19 00:52:39.130224 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 00:52:39.130228 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.130233 | orchestrator | 2025-09-19 00:52:39.130238 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-19 00:52:39.130242 | orchestrator | Friday 19 September 2025 00:49:50 +0000 (0:00:00.688) 0:08:08.585 ****** 2025-09-19 00:52:39.130247 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.130252 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.130257 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.130261 | orchestrator | 2025-09-19 00:52:39.130266 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-19 00:52:39.130271 | orchestrator | Friday 19 September 2025 00:49:51 +0000 (0:00:00.614) 0:08:09.199 ****** 2025-09-19 00:52:39.130275 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.130280 | orchestrator | 2025-09-19 00:52:39.130285 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-19 00:52:39.130290 | orchestrator | Friday 19 September 2025 00:49:51 +0000 (0:00:00.226) 0:08:09.426 ****** 2025-09-19 00:52:39.130294 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.130299 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.130304 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.130308 | orchestrator | 2025-09-19 00:52:39.130313 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-19 00:52:39.130318 | orchestrator | Friday 19 September 2025 00:49:52 +0000 (0:00:00.315) 0:08:09.741 ****** 2025-09-19 00:52:39.130323 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.130327 | orchestrator | 2025-09-19 00:52:39.130332 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-19 00:52:39.130337 | orchestrator | Friday 19 September 2025 00:49:52 +0000 (0:00:00.212) 0:08:09.954 ****** 2025-09-19 00:52:39.130342 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.130350 | orchestrator | 2025-09-19 00:52:39.130355 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-19 00:52:39.130360 | orchestrator | Friday 19 September 2025 00:49:52 +0000 (0:00:00.291) 0:08:10.245 ****** 2025-09-19 00:52:39.130365 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.130370 | orchestrator | 2025-09-19 00:52:39.130374 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-19 00:52:39.130379 | orchestrator | Friday 19 September 2025 00:49:52 +0000 (0:00:00.114) 0:08:10.360 ****** 2025-09-19 00:52:39.130387 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.130392 | orchestrator | 2025-09-19 00:52:39.130396 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-19 00:52:39.130401 | orchestrator | Friday 19 September 2025 00:49:52 +0000 (0:00:00.239) 0:08:10.600 ****** 2025-09-19 00:52:39.130406 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.130411 | orchestrator | 2025-09-19 00:52:39.130418 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-19 00:52:39.130423 | orchestrator | Friday 19 September 2025 00:49:53 +0000 (0:00:00.204) 0:08:10.805 ****** 2025-09-19 00:52:39.130428 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 00:52:39.130432 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 00:52:39.130437 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 00:52:39.130442 | orchestrator | skipping: [testbed-node-3] 2025-09-19 
00:52:39.130446 | orchestrator | 2025-09-19 00:52:39.130451 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-19 00:52:39.130456 | orchestrator | Friday 19 September 2025 00:49:54 +0000 (0:00:00.950) 0:08:11.755 ****** 2025-09-19 00:52:39.130461 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.130465 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.130470 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.130475 | orchestrator | 2025-09-19 00:52:39.130479 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-19 00:52:39.130484 | orchestrator | Friday 19 September 2025 00:49:54 +0000 (0:00:00.338) 0:08:12.093 ****** 2025-09-19 00:52:39.130489 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.130493 | orchestrator | 2025-09-19 00:52:39.130498 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-19 00:52:39.130503 | orchestrator | Friday 19 September 2025 00:49:54 +0000 (0:00:00.218) 0:08:12.312 ****** 2025-09-19 00:52:39.130508 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.130512 | orchestrator | 2025-09-19 00:52:39.130517 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-09-19 00:52:39.130522 | orchestrator | 2025-09-19 00:52:39.130527 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-19 00:52:39.130532 | orchestrator | Friday 19 September 2025 00:49:55 +0000 (0:00:00.685) 0:08:12.997 ****** 2025-09-19 00:52:39.130536 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.130542 | orchestrator | 2025-09-19 00:52:39.130546 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-09-19 00:52:39.130551 | orchestrator | Friday 19 September 2025 00:49:56 +0000 (0:00:01.213) 0:08:14.211 ****** 2025-09-19 00:52:39.130556 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.130561 | orchestrator | 2025-09-19 00:52:39.130566 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-19 00:52:39.130570 | orchestrator | Friday 19 September 2025 00:49:57 +0000 (0:00:01.188) 0:08:15.399 ****** 2025-09-19 00:52:39.130575 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.130580 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.130584 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.130594 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.130599 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.130603 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.130608 | orchestrator | 2025-09-19 00:52:39.130613 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-19 00:52:39.130618 | orchestrator | Friday 19 September 2025 00:49:58 +0000 (0:00:01.137) 0:08:16.537 ****** 2025-09-19 00:52:39.130622 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.130627 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.130632 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.130636 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.130641 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.130646 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.130651 | orchestrator | 2025-09-19 00:52:39.130655 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-19 00:52:39.130660 | orchestrator | Friday 19 
September 2025 00:49:59 +0000 (0:00:01.040) 0:08:17.577 ****** 2025-09-19 00:52:39.130665 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.130670 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.130674 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.130679 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.130684 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.130688 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.130693 | orchestrator | 2025-09-19 00:52:39.130698 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-19 00:52:39.130703 | orchestrator | Friday 19 September 2025 00:50:01 +0000 (0:00:01.307) 0:08:18.885 ****** 2025-09-19 00:52:39.130707 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.130712 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.130717 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.130721 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.130726 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.130731 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.130736 | orchestrator | 2025-09-19 00:52:39.130740 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-19 00:52:39.130745 | orchestrator | Friday 19 September 2025 00:50:02 +0000 (0:00:01.012) 0:08:19.897 ****** 2025-09-19 00:52:39.130750 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.130755 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.130759 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.130764 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.130769 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.130773 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.130778 | orchestrator | 2025-09-19 00:52:39.130783 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2025-09-19 00:52:39.130788 | orchestrator | Friday 19 September 2025 00:50:03 +0000 (0:00:00.997) 0:08:20.895 ****** 2025-09-19 00:52:39.130795 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.130800 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.130804 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.130809 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.130814 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.130819 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.130823 | orchestrator | 2025-09-19 00:52:39.130830 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-19 00:52:39.130835 | orchestrator | Friday 19 September 2025 00:50:03 +0000 (0:00:00.635) 0:08:21.530 ****** 2025-09-19 00:52:39.130840 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.130844 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.130849 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.130854 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.130858 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.130890 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.130896 | orchestrator | 2025-09-19 00:52:39.130905 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-19 00:52:39.130910 | orchestrator | Friday 19 September 2025 00:50:04 +0000 (0:00:00.870) 0:08:22.401 ****** 2025-09-19 00:52:39.130915 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.130919 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.130924 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.130929 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.130934 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.130938 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.130943 | 
orchestrator | 2025-09-19 00:52:39.130948 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-19 00:52:39.130953 | orchestrator | Friday 19 September 2025 00:50:05 +0000 (0:00:01.069) 0:08:23.470 ****** 2025-09-19 00:52:39.130957 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.130962 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.130967 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.130972 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.130976 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.130981 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.130986 | orchestrator | 2025-09-19 00:52:39.130990 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-19 00:52:39.130995 | orchestrator | Friday 19 September 2025 00:50:07 +0000 (0:00:01.344) 0:08:24.815 ****** 2025-09-19 00:52:39.131000 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.131005 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.131009 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.131014 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.131019 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.131024 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.131028 | orchestrator | 2025-09-19 00:52:39.131033 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-19 00:52:39.131038 | orchestrator | Friday 19 September 2025 00:50:07 +0000 (0:00:00.554) 0:08:25.370 ****** 2025-09-19 00:52:39.131042 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.131047 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.131052 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.131056 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.131061 | orchestrator | skipping: [testbed-node-4] 2025-09-19 
00:52:39.131066 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.131071 | orchestrator | 2025-09-19 00:52:39.131075 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-19 00:52:39.131080 | orchestrator | Friday 19 September 2025 00:50:08 +0000 (0:00:00.914) 0:08:26.285 ****** 2025-09-19 00:52:39.131085 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.131090 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.131094 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.131099 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.131104 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.131109 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.131113 | orchestrator | 2025-09-19 00:52:39.131118 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-19 00:52:39.131123 | orchestrator | Friday 19 September 2025 00:50:09 +0000 (0:00:00.675) 0:08:26.960 ****** 2025-09-19 00:52:39.131128 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.131132 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.131137 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.131142 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.131147 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.131151 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.131156 | orchestrator | 2025-09-19 00:52:39.131161 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-19 00:52:39.131166 | orchestrator | Friday 19 September 2025 00:50:10 +0000 (0:00:00.855) 0:08:27.816 ****** 2025-09-19 00:52:39.131170 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.131175 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.131183 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.131188 | orchestrator | ok: 
[testbed-node-3] 2025-09-19 00:52:39.131193 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.131197 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.131202 | orchestrator | 2025-09-19 00:52:39.131207 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-19 00:52:39.131212 | orchestrator | Friday 19 September 2025 00:50:10 +0000 (0:00:00.632) 0:08:28.449 ****** 2025-09-19 00:52:39.131217 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.131221 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.131226 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.131231 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.131235 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.131240 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.131245 | orchestrator | 2025-09-19 00:52:39.131249 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-19 00:52:39.131254 | orchestrator | Friday 19 September 2025 00:50:11 +0000 (0:00:00.860) 0:08:29.309 ****** 2025-09-19 00:52:39.131259 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:52:39.131263 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:52:39.131268 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:52:39.131273 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.131277 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.131282 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.131287 | orchestrator | 2025-09-19 00:52:39.131291 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-19 00:52:39.131299 | orchestrator | Friday 19 September 2025 00:50:12 +0000 (0:00:00.584) 0:08:29.893 ****** 2025-09-19 00:52:39.131304 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.131308 | orchestrator | ok: [testbed-node-1] 2025-09-19 
00:52:39.131313 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.131318 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.131323 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.131327 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.131332 | orchestrator | 2025-09-19 00:52:39.131339 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-19 00:52:39.131345 | orchestrator | Friday 19 September 2025 00:50:13 +0000 (0:00:00.805) 0:08:30.699 ****** 2025-09-19 00:52:39.131349 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.131354 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.131358 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.131363 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.131367 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.131372 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.131376 | orchestrator | 2025-09-19 00:52:39.131381 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-19 00:52:39.131385 | orchestrator | Friday 19 September 2025 00:50:13 +0000 (0:00:00.596) 0:08:31.296 ****** 2025-09-19 00:52:39.131390 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.131394 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.131399 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.131403 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.131407 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.131412 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.131416 | orchestrator | 2025-09-19 00:52:39.131421 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-09-19 00:52:39.131425 | orchestrator | Friday 19 September 2025 00:50:14 +0000 (0:00:01.277) 0:08:32.574 ****** 2025-09-19 00:52:39.131430 | orchestrator | changed: [testbed-node-0] 2025-09-19 
00:52:39.131434 | orchestrator | 2025-09-19 00:52:39.131439 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-09-19 00:52:39.131443 | orchestrator | Friday 19 September 2025 00:50:19 +0000 (0:00:04.118) 0:08:36.693 ****** 2025-09-19 00:52:39.131448 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.131456 | orchestrator | 2025-09-19 00:52:39.131460 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-09-19 00:52:39.131465 | orchestrator | Friday 19 September 2025 00:50:21 +0000 (0:00:02.616) 0:08:39.310 ****** 2025-09-19 00:52:39.131469 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.131474 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:52:39.131478 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:52:39.131483 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:52:39.131487 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:52:39.131492 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:52:39.131496 | orchestrator | 2025-09-19 00:52:39.131500 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-09-19 00:52:39.131505 | orchestrator | Friday 19 September 2025 00:50:23 +0000 (0:00:01.478) 0:08:40.788 ****** 2025-09-19 00:52:39.131510 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:52:39.131514 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:52:39.131519 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:52:39.131523 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:52:39.131527 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:52:39.131532 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:52:39.131536 | orchestrator | 2025-09-19 00:52:39.131541 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-09-19 00:52:39.131545 | orchestrator | Friday 19 September 2025 00:50:24 +0000 
(0:00:00.957) 0:08:41.745 ****** 2025-09-19 00:52:39.131550 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.131555 | orchestrator | 2025-09-19 00:52:39.131559 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-09-19 00:52:39.131564 | orchestrator | Friday 19 September 2025 00:50:25 +0000 (0:00:01.403) 0:08:43.149 ****** 2025-09-19 00:52:39.131568 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:52:39.131573 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:52:39.131577 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:52:39.131581 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:52:39.131586 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:52:39.131590 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:52:39.131595 | orchestrator | 2025-09-19 00:52:39.131599 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-09-19 00:52:39.131604 | orchestrator | Friday 19 September 2025 00:50:27 +0000 (0:00:01.887) 0:08:45.036 ****** 2025-09-19 00:52:39.131609 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:52:39.131613 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:52:39.131617 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:52:39.131622 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:52:39.131626 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:52:39.131630 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:52:39.131635 | orchestrator | 2025-09-19 00:52:39.131639 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-09-19 00:52:39.131644 | orchestrator | Friday 19 September 2025 00:50:30 +0000 (0:00:03.180) 0:08:48.217 ****** 2025-09-19 00:52:39.131649 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.131653 | orchestrator | 2025-09-19 00:52:39.131658 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-09-19 00:52:39.131663 | orchestrator | Friday 19 September 2025 00:50:31 +0000 (0:00:01.301) 0:08:49.519 ****** 2025-09-19 00:52:39.131667 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.131672 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.131676 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:52:39.131681 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.131685 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.131689 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.131697 | orchestrator | 2025-09-19 00:52:39.131701 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-09-19 00:52:39.131706 | orchestrator | Friday 19 September 2025 00:50:32 +0000 (0:00:00.815) 0:08:50.334 ****** 2025-09-19 00:52:39.131713 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:52:39.131718 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:52:39.131722 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:52:39.131727 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:52:39.131731 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:52:39.131736 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:52:39.131740 | orchestrator | 2025-09-19 00:52:39.131745 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-09-19 00:52:39.131752 | orchestrator | Friday 19 September 2025 00:50:34 +0000 (0:00:02.160) 0:08:52.495 ****** 2025-09-19 00:52:39.131756 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:52:39.131761 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:52:39.131765 | orchestrator | ok: 
[testbed-node-2] 2025-09-19 00:52:39.131770 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.131774 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.131779 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.131783 | orchestrator | 2025-09-19 00:52:39.131788 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-09-19 00:52:39.131792 | orchestrator | 2025-09-19 00:52:39.131797 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-19 00:52:39.131801 | orchestrator | Friday 19 September 2025 00:50:36 +0000 (0:00:01.431) 0:08:53.926 ****** 2025-09-19 00:52:39.131806 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.131810 | orchestrator | 2025-09-19 00:52:39.131815 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-19 00:52:39.131819 | orchestrator | Friday 19 September 2025 00:50:37 +0000 (0:00:00.744) 0:08:54.671 ****** 2025-09-19 00:52:39.131824 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.131828 | orchestrator | 2025-09-19 00:52:39.131833 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-19 00:52:39.131837 | orchestrator | Friday 19 September 2025 00:50:37 +0000 (0:00:00.517) 0:08:55.188 ****** 2025-09-19 00:52:39.131842 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.131847 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.131851 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.131856 | orchestrator | 2025-09-19 00:52:39.131860 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-19 00:52:39.131876 | orchestrator | 
Friday 19 September 2025 00:50:37 +0000 (0:00:00.301) 0:08:55.489 ****** 2025-09-19 00:52:39.131881 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.131885 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.131890 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.131894 | orchestrator | 2025-09-19 00:52:39.131899 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-19 00:52:39.131903 | orchestrator | Friday 19 September 2025 00:50:38 +0000 (0:00:00.998) 0:08:56.487 ****** 2025-09-19 00:52:39.131908 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.131912 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.131917 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.131921 | orchestrator | 2025-09-19 00:52:39.131926 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-19 00:52:39.131930 | orchestrator | Friday 19 September 2025 00:50:39 +0000 (0:00:00.780) 0:08:57.268 ****** 2025-09-19 00:52:39.131935 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.131939 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.131944 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.131948 | orchestrator | 2025-09-19 00:52:39.131953 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-19 00:52:39.131960 | orchestrator | Friday 19 September 2025 00:50:40 +0000 (0:00:00.806) 0:08:58.075 ****** 2025-09-19 00:52:39.131965 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.131969 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.131974 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.131978 | orchestrator | 2025-09-19 00:52:39.131983 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-19 00:52:39.131987 | orchestrator | Friday 19 September 2025 00:50:40 +0000 (0:00:00.337) 
0:08:58.413 ****** 2025-09-19 00:52:39.131992 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.131997 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.132001 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.132006 | orchestrator | 2025-09-19 00:52:39.132010 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-19 00:52:39.132015 | orchestrator | Friday 19 September 2025 00:50:41 +0000 (0:00:00.612) 0:08:59.026 ****** 2025-09-19 00:52:39.132019 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.132024 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.132028 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.132032 | orchestrator | 2025-09-19 00:52:39.132037 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-19 00:52:39.132042 | orchestrator | Friday 19 September 2025 00:50:41 +0000 (0:00:00.333) 0:08:59.360 ****** 2025-09-19 00:52:39.132046 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.132050 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.132055 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.132059 | orchestrator | 2025-09-19 00:52:39.132064 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-19 00:52:39.132068 | orchestrator | Friday 19 September 2025 00:50:42 +0000 (0:00:00.813) 0:09:00.173 ****** 2025-09-19 00:52:39.132073 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.132077 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.132082 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.132086 | orchestrator | 2025-09-19 00:52:39.132091 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-19 00:52:39.132095 | orchestrator | Friday 19 September 2025 00:50:43 +0000 (0:00:00.737) 0:09:00.910 ****** 2025-09-19 
00:52:39.132100 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.132104 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.132109 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.132113 | orchestrator | 2025-09-19 00:52:39.132118 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-19 00:52:39.132122 | orchestrator | Friday 19 September 2025 00:50:43 +0000 (0:00:00.441) 0:09:01.352 ****** 2025-09-19 00:52:39.132130 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.132134 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.132139 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.132143 | orchestrator | 2025-09-19 00:52:39.132148 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-19 00:52:39.132152 | orchestrator | Friday 19 September 2025 00:50:43 +0000 (0:00:00.252) 0:09:01.604 ****** 2025-09-19 00:52:39.132159 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.132164 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.132168 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.132173 | orchestrator | 2025-09-19 00:52:39.132177 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-19 00:52:39.132182 | orchestrator | Friday 19 September 2025 00:50:44 +0000 (0:00:00.315) 0:09:01.920 ****** 2025-09-19 00:52:39.132187 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.132191 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.132196 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.132200 | orchestrator | 2025-09-19 00:52:39.132205 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-19 00:52:39.132209 | orchestrator | Friday 19 September 2025 00:50:44 +0000 (0:00:00.302) 0:09:02.222 ****** 2025-09-19 00:52:39.132219 | orchestrator | ok: 
[testbed-node-3] 2025-09-19 00:52:39.132223 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.132228 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.132232 | orchestrator | 2025-09-19 00:52:39.132237 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-19 00:52:39.132241 | orchestrator | Friday 19 September 2025 00:50:45 +0000 (0:00:00.458) 0:09:02.680 ****** 2025-09-19 00:52:39.132246 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.132250 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.132255 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.132259 | orchestrator | 2025-09-19 00:52:39.132264 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-19 00:52:39.132268 | orchestrator | Friday 19 September 2025 00:50:45 +0000 (0:00:00.279) 0:09:02.959 ****** 2025-09-19 00:52:39.132273 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.132277 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.132282 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.132286 | orchestrator | 2025-09-19 00:52:39.132291 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-19 00:52:39.132295 | orchestrator | Friday 19 September 2025 00:50:45 +0000 (0:00:00.285) 0:09:03.245 ****** 2025-09-19 00:52:39.132300 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.132304 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.132309 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.132313 | orchestrator | 2025-09-19 00:52:39.132318 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-19 00:52:39.132322 | orchestrator | Friday 19 September 2025 00:50:45 +0000 (0:00:00.242) 0:09:03.487 ****** 2025-09-19 00:52:39.132327 | orchestrator | ok: [testbed-node-3] 
2025-09-19 00:52:39.132331 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.132336 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.132340 | orchestrator | 2025-09-19 00:52:39.132345 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-19 00:52:39.132349 | orchestrator | Friday 19 September 2025 00:50:46 +0000 (0:00:00.510) 0:09:03.998 ****** 2025-09-19 00:52:39.132354 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.132358 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.132363 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.132367 | orchestrator | 2025-09-19 00:52:39.132372 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-09-19 00:52:39.132376 | orchestrator | Friday 19 September 2025 00:50:46 +0000 (0:00:00.470) 0:09:04.469 ****** 2025-09-19 00:52:39.132381 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.132385 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.132390 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-09-19 00:52:39.132394 | orchestrator | 2025-09-19 00:52:39.132399 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-09-19 00:52:39.132403 | orchestrator | Friday 19 September 2025 00:50:47 +0000 (0:00:00.500) 0:09:04.969 ****** 2025-09-19 00:52:39.132408 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-19 00:52:39.132412 | orchestrator | 2025-09-19 00:52:39.132417 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-09-19 00:52:39.132421 | orchestrator | Friday 19 September 2025 00:50:49 +0000 (0:00:02.174) 0:09:07.143 ****** 2025-09-19 00:52:39.132426 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 
'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-09-19 00:52:39.132432 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.132437 | orchestrator | 2025-09-19 00:52:39.132441 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-09-19 00:52:39.132446 | orchestrator | Friday 19 September 2025 00:50:49 +0000 (0:00:00.239) 0:09:07.383 ****** 2025-09-19 00:52:39.132454 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-19 00:52:39.132463 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-19 00:52:39.132468 | orchestrator | 2025-09-19 00:52:39.132472 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-09-19 00:52:39.132479 | orchestrator | Friday 19 September 2025 00:50:58 +0000 (0:00:08.605) 0:09:15.988 ****** 2025-09-19 00:52:39.132484 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-19 00:52:39.132489 | orchestrator | 2025-09-19 00:52:39.132493 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-09-19 00:52:39.132498 | orchestrator | Friday 19 September 2025 00:51:02 +0000 (0:00:03.659) 0:09:19.647 ****** 2025-09-19 00:52:39.132504 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.132509 | orchestrator | 2025-09-19 00:52:39.132514 | 
orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-09-19 00:52:39.132518 | orchestrator | Friday 19 September 2025 00:51:02 +0000 (0:00:00.788) 0:09:20.436 ****** 2025-09-19 00:52:39.132523 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-19 00:52:39.132527 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-09-19 00:52:39.132532 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-19 00:52:39.132536 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-19 00:52:39.132541 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-09-19 00:52:39.132545 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-09-19 00:52:39.132550 | orchestrator | 2025-09-19 00:52:39.132554 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-09-19 00:52:39.132559 | orchestrator | Friday 19 September 2025 00:51:04 +0000 (0:00:01.644) 0:09:22.080 ****** 2025-09-19 00:52:39.132563 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:52:39.132568 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-19 00:52:39.132572 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 00:52:39.132577 | orchestrator | 2025-09-19 00:52:39.132581 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-09-19 00:52:39.132586 | orchestrator | Friday 19 September 2025 00:51:06 +0000 (0:00:02.230) 0:09:24.311 ****** 2025-09-19 00:52:39.132590 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-19 00:52:39.132595 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-19 00:52:39.132599 | orchestrator | changed: [testbed-node-3] 
2025-09-19 00:52:39.132604 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-19 00:52:39.132608 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-19 00:52:39.132613 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:52:39.132617 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-19 00:52:39.132622 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-19 00:52:39.132626 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:52:39.132631 | orchestrator | 2025-09-19 00:52:39.132635 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-09-19 00:52:39.132639 | orchestrator | Friday 19 September 2025 00:51:07 +0000 (0:00:01.182) 0:09:25.493 ****** 2025-09-19 00:52:39.132644 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:52:39.132649 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:52:39.132666 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:52:39.132671 | orchestrator | 2025-09-19 00:52:39.132675 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-09-19 00:52:39.132680 | orchestrator | Friday 19 September 2025 00:51:10 +0000 (0:00:02.721) 0:09:28.215 ****** 2025-09-19 00:52:39.132684 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.132689 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.132693 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.132698 | orchestrator | 2025-09-19 00:52:39.132702 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-09-19 00:52:39.132707 | orchestrator | Friday 19 September 2025 00:51:10 +0000 (0:00:00.331) 0:09:28.546 ****** 2025-09-19 00:52:39.132711 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.132716 | orchestrator | 2025-09-19 00:52:39.132720 | 
orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-09-19 00:52:39.132725 | orchestrator | Friday 19 September 2025 00:51:11 +0000 (0:00:00.791) 0:09:29.338 ****** 2025-09-19 00:52:39.132729 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.132734 | orchestrator | 2025-09-19 00:52:39.132738 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-09-19 00:52:39.132743 | orchestrator | Friday 19 September 2025 00:51:12 +0000 (0:00:00.525) 0:09:29.864 ****** 2025-09-19 00:52:39.132747 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:52:39.132752 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:52:39.132756 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:52:39.132761 | orchestrator | 2025-09-19 00:52:39.132765 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-09-19 00:52:39.132770 | orchestrator | Friday 19 September 2025 00:51:13 +0000 (0:00:01.552) 0:09:31.417 ****** 2025-09-19 00:52:39.132774 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:52:39.132779 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:52:39.132783 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:52:39.132788 | orchestrator | 2025-09-19 00:52:39.132792 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-09-19 00:52:39.132797 | orchestrator | Friday 19 September 2025 00:51:14 +0000 (0:00:01.168) 0:09:32.585 ****** 2025-09-19 00:52:39.132801 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:52:39.132806 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:52:39.132810 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:52:39.132815 | orchestrator | 2025-09-19 00:52:39.132819 | orchestrator | TASK [ceph-mds : Systemd start mds container] 
********************************** 2025-09-19 00:52:39.132824 | orchestrator | Friday 19 September 2025 00:51:16 +0000 (0:00:01.791) 0:09:34.376 ****** 2025-09-19 00:52:39.132828 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:52:39.132835 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:52:39.132840 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:52:39.132844 | orchestrator | 2025-09-19 00:52:39.132849 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-09-19 00:52:39.132853 | orchestrator | Friday 19 September 2025 00:51:18 +0000 (0:00:01.931) 0:09:36.307 ****** 2025-09-19 00:52:39.132858 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.132890 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.132896 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.132901 | orchestrator | 2025-09-19 00:52:39.132905 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-19 00:52:39.132910 | orchestrator | Friday 19 September 2025 00:51:20 +0000 (0:00:01.531) 0:09:37.839 ****** 2025-09-19 00:52:39.132914 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:52:39.132919 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:52:39.132923 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:52:39.132928 | orchestrator | 2025-09-19 00:52:39.132932 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-19 00:52:39.132941 | orchestrator | Friday 19 September 2025 00:51:20 +0000 (0:00:00.667) 0:09:38.507 ****** 2025-09-19 00:52:39.132946 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.132950 | orchestrator | 2025-09-19 00:52:39.132955 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-19 00:52:39.132959 | orchestrator | 
Friday 19 September 2025 00:51:21 +0000 (0:00:00.823) 0:09:39.330 ****** 2025-09-19 00:52:39.132964 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.132968 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.132973 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.132977 | orchestrator | 2025-09-19 00:52:39.132982 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-19 00:52:39.132986 | orchestrator | Friday 19 September 2025 00:51:22 +0000 (0:00:00.321) 0:09:39.652 ****** 2025-09-19 00:52:39.132991 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:52:39.132995 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:52:39.133000 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:52:39.133004 | orchestrator | 2025-09-19 00:52:39.133009 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-19 00:52:39.133013 | orchestrator | Friday 19 September 2025 00:51:23 +0000 (0:00:01.256) 0:09:40.909 ****** 2025-09-19 00:52:39.133018 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 00:52:39.133022 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 00:52:39.133027 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 00:52:39.133031 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.133036 | orchestrator | 2025-09-19 00:52:39.133040 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-19 00:52:39.133045 | orchestrator | Friday 19 September 2025 00:51:24 +0000 (0:00:01.111) 0:09:42.020 ****** 2025-09-19 00:52:39.133049 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.133054 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.133058 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.133062 | orchestrator | 2025-09-19 00:52:39.133067 | orchestrator | PLAY [Apply role 
ceph-rgw] ***************************************************** 2025-09-19 00:52:39.133072 | orchestrator | 2025-09-19 00:52:39.133076 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-19 00:52:39.133081 | orchestrator | Friday 19 September 2025 00:51:24 +0000 (0:00:00.569) 0:09:42.590 ****** 2025-09-19 00:52:39.133085 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.133090 | orchestrator | 2025-09-19 00:52:39.133094 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-19 00:52:39.133099 | orchestrator | Friday 19 September 2025 00:51:25 +0000 (0:00:00.898) 0:09:43.489 ****** 2025-09-19 00:52:39.133103 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.133108 | orchestrator | 2025-09-19 00:52:39.133113 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-19 00:52:39.133117 | orchestrator | Friday 19 September 2025 00:51:26 +0000 (0:00:00.546) 0:09:44.035 ****** 2025-09-19 00:52:39.133122 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.133126 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.133131 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.133135 | orchestrator | 2025-09-19 00:52:39.133139 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-19 00:52:39.133144 | orchestrator | Friday 19 September 2025 00:51:26 +0000 (0:00:00.314) 0:09:44.349 ****** 2025-09-19 00:52:39.133149 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.133153 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.133158 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.133162 | orchestrator | 
2025-09-19 00:52:39.133170 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-19 00:52:39.133175 | orchestrator | Friday 19 September 2025 00:51:27 +0000 (0:00:01.036) 0:09:45.386 ****** 2025-09-19 00:52:39.133179 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.133184 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.133188 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.133193 | orchestrator | 2025-09-19 00:52:39.133197 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-19 00:52:39.133202 | orchestrator | Friday 19 September 2025 00:51:28 +0000 (0:00:00.726) 0:09:46.112 ****** 2025-09-19 00:52:39.133206 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.133211 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.133215 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.133220 | orchestrator | 2025-09-19 00:52:39.133224 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-19 00:52:39.133229 | orchestrator | Friday 19 September 2025 00:51:29 +0000 (0:00:00.748) 0:09:46.861 ****** 2025-09-19 00:52:39.133233 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.133238 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.133242 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.133247 | orchestrator | 2025-09-19 00:52:39.133255 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-19 00:52:39.133259 | orchestrator | Friday 19 September 2025 00:51:29 +0000 (0:00:00.326) 0:09:47.188 ****** 2025-09-19 00:52:39.133264 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.133268 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.133273 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.133277 | orchestrator | 2025-09-19 00:52:39.133284 | 
orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-19 00:52:39.133289 | orchestrator | Friday 19 September 2025 00:51:30 +0000 (0:00:00.587) 0:09:47.775 ****** 2025-09-19 00:52:39.133293 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.133298 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.133302 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.133307 | orchestrator | 2025-09-19 00:52:39.133311 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-19 00:52:39.133316 | orchestrator | Friday 19 September 2025 00:51:30 +0000 (0:00:00.328) 0:09:48.103 ****** 2025-09-19 00:52:39.133320 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.133325 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.133329 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.133334 | orchestrator | 2025-09-19 00:52:39.133338 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-19 00:52:39.133343 | orchestrator | Friday 19 September 2025 00:51:31 +0000 (0:00:00.732) 0:09:48.836 ****** 2025-09-19 00:52:39.133347 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.133352 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.133356 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.133361 | orchestrator | 2025-09-19 00:52:39.133365 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-19 00:52:39.133370 | orchestrator | Friday 19 September 2025 00:51:31 +0000 (0:00:00.709) 0:09:49.545 ****** 2025-09-19 00:52:39.133374 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.133379 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.133383 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.133388 | orchestrator | 2025-09-19 00:52:39.133392 | orchestrator | TASK [ceph-handler : 
Set_fact handler_mon_status] ****************************** 2025-09-19 00:52:39.133397 | orchestrator | Friday 19 September 2025 00:51:32 +0000 (0:00:00.564) 0:09:50.110 ****** 2025-09-19 00:52:39.133401 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.133406 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.133410 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.133415 | orchestrator | 2025-09-19 00:52:39.133419 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-19 00:52:39.133427 | orchestrator | Friday 19 September 2025 00:51:32 +0000 (0:00:00.305) 0:09:50.416 ****** 2025-09-19 00:52:39.133431 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.133435 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.133439 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.133443 | orchestrator | 2025-09-19 00:52:39.133447 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-19 00:52:39.133451 | orchestrator | Friday 19 September 2025 00:51:33 +0000 (0:00:00.327) 0:09:50.743 ****** 2025-09-19 00:52:39.133455 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.133460 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.133464 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.133468 | orchestrator | 2025-09-19 00:52:39.133472 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-19 00:52:39.133476 | orchestrator | Friday 19 September 2025 00:51:33 +0000 (0:00:00.325) 0:09:51.068 ****** 2025-09-19 00:52:39.133480 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.133484 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.133488 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.133492 | orchestrator | 2025-09-19 00:52:39.133496 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] 
****************************** 2025-09-19 00:52:39.133500 | orchestrator | Friday 19 September 2025 00:51:34 +0000 (0:00:00.614) 0:09:51.683 ****** 2025-09-19 00:52:39.133504 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.133508 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.133512 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.133516 | orchestrator | 2025-09-19 00:52:39.133520 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-19 00:52:39.133525 | orchestrator | Friday 19 September 2025 00:51:34 +0000 (0:00:00.353) 0:09:52.036 ****** 2025-09-19 00:52:39.133529 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.133533 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.133537 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.133541 | orchestrator | 2025-09-19 00:52:39.133545 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-19 00:52:39.133549 | orchestrator | Friday 19 September 2025 00:51:34 +0000 (0:00:00.338) 0:09:52.375 ****** 2025-09-19 00:52:39.133553 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.133557 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.133561 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.133565 | orchestrator | 2025-09-19 00:52:39.133569 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-19 00:52:39.133573 | orchestrator | Friday 19 September 2025 00:51:35 +0000 (0:00:00.325) 0:09:52.701 ****** 2025-09-19 00:52:39.133577 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.133581 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.133585 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.133590 | orchestrator | 2025-09-19 00:52:39.133594 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] 
************************* 2025-09-19 00:52:39.133598 | orchestrator | Friday 19 September 2025 00:51:35 +0000 (0:00:00.622) 0:09:53.323 ****** 2025-09-19 00:52:39.133602 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.133606 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.133610 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.133614 | orchestrator | 2025-09-19 00:52:39.133618 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-09-19 00:52:39.133622 | orchestrator | Friday 19 September 2025 00:51:36 +0000 (0:00:00.558) 0:09:53.882 ****** 2025-09-19 00:52:39.133626 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.133630 | orchestrator | 2025-09-19 00:52:39.133634 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-19 00:52:39.133641 | orchestrator | Friday 19 September 2025 00:51:36 +0000 (0:00:00.513) 0:09:54.395 ****** 2025-09-19 00:52:39.133645 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:52:39.133653 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-19 00:52:39.133657 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 00:52:39.133661 | orchestrator | 2025-09-19 00:52:39.133667 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-19 00:52:39.133671 | orchestrator | Friday 19 September 2025 00:51:39 +0000 (0:00:02.687) 0:09:57.083 ****** 2025-09-19 00:52:39.133675 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-19 00:52:39.133679 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-19 00:52:39.133683 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:52:39.133687 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-19 00:52:39.133691 
| orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-19 00:52:39.133695 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:52:39.133699 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-19 00:52:39.133703 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-19 00:52:39.133708 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:52:39.133712 | orchestrator | 2025-09-19 00:52:39.133716 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-09-19 00:52:39.133720 | orchestrator | Friday 19 September 2025 00:51:40 +0000 (0:00:01.173) 0:09:58.256 ****** 2025-09-19 00:52:39.133724 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.133728 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.133732 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.133736 | orchestrator | 2025-09-19 00:52:39.133740 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-09-19 00:52:39.133744 | orchestrator | Friday 19 September 2025 00:51:40 +0000 (0:00:00.308) 0:09:58.565 ****** 2025-09-19 00:52:39.133748 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.133752 | orchestrator | 2025-09-19 00:52:39.133756 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-09-19 00:52:39.133761 | orchestrator | Friday 19 September 2025 00:51:41 +0000 (0:00:00.742) 0:09:59.308 ****** 2025-09-19 00:52:39.133765 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 00:52:39.133769 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 
'radosgw_frontend_port': 8081}) 2025-09-19 00:52:39.133773 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 00:52:39.133777 | orchestrator | 2025-09-19 00:52:39.133781 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-09-19 00:52:39.133785 | orchestrator | Friday 19 September 2025 00:51:42 +0000 (0:00:00.851) 0:10:00.160 ****** 2025-09-19 00:52:39.133789 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:52:39.133794 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-19 00:52:39.133798 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:52:39.133802 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-19 00:52:39.133806 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:52:39.133810 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-19 00:52:39.133814 | orchestrator | 2025-09-19 00:52:39.133818 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-19 00:52:39.133827 | orchestrator | Friday 19 September 2025 00:51:47 +0000 (0:00:05.006) 0:10:05.166 ****** 2025-09-19 00:52:39.133831 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:52:39.133835 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 00:52:39.133839 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2025-09-19 00:52:39.133843 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 00:52:39.133847 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:52:39.133852 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 00:52:39.133856 | orchestrator | 2025-09-19 00:52:39.133860 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-19 00:52:39.133876 | orchestrator | Friday 19 September 2025 00:51:50 +0000 (0:00:03.134) 0:10:08.301 ****** 2025-09-19 00:52:39.133881 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-19 00:52:39.133885 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:52:39.133889 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-19 00:52:39.133893 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:52:39.133897 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-19 00:52:39.133901 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:52:39.133905 | orchestrator | 2025-09-19 00:52:39.133909 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-09-19 00:52:39.133915 | orchestrator | Friday 19 September 2025 00:51:52 +0000 (0:00:01.500) 0:10:09.801 ****** 2025-09-19 00:52:39.133920 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-09-19 00:52:39.133924 | orchestrator | 2025-09-19 00:52:39.133928 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-09-19 00:52:39.133934 | orchestrator | Friday 19 September 2025 00:51:52 +0000 (0:00:00.254) 0:10:10.056 ****** 2025-09-19 00:52:39.133938 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 00:52:39.133943 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 00:52:39.133947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 00:52:39.133951 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 00:52:39.133956 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 00:52:39.133960 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.133964 | orchestrator | 2025-09-19 00:52:39.133968 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-09-19 00:52:39.133972 | orchestrator | Friday 19 September 2025 00:51:53 +0000 (0:00:00.659) 0:10:10.715 ****** 2025-09-19 00:52:39.133976 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 00:52:39.133980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 00:52:39.133984 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 00:52:39.133988 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 00:52:39.133993 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 00:52:39.133997 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.134005 | orchestrator | 2025-09-19 00:52:39.134009 | 
orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-09-19 00:52:39.134013 | orchestrator | Friday 19 September 2025 00:51:53 +0000 (0:00:00.685) 0:10:11.400 ****** 2025-09-19 00:52:39.134040 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 00:52:39.134044 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 00:52:39.134048 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 00:52:39.134052 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 00:52:39.134057 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 00:52:39.134061 | orchestrator | 2025-09-19 00:52:39.134065 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-09-19 00:52:39.134069 | orchestrator | Friday 19 September 2025 00:52:26 +0000 (0:00:32.481) 0:10:43.882 ****** 2025-09-19 00:52:39.134073 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.134077 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.134081 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.134085 | orchestrator | 2025-09-19 00:52:39.134089 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-09-19 00:52:39.134093 | orchestrator | Friday 19 September 2025 00:52:26 +0000 (0:00:00.316) 0:10:44.198 
****** 2025-09-19 00:52:39.134098 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.134102 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.134106 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.134110 | orchestrator | 2025-09-19 00:52:39.134114 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-09-19 00:52:39.134118 | orchestrator | Friday 19 September 2025 00:52:26 +0000 (0:00:00.288) 0:10:44.486 ****** 2025-09-19 00:52:39.134122 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.134126 | orchestrator | 2025-09-19 00:52:39.134130 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-09-19 00:52:39.134134 | orchestrator | Friday 19 September 2025 00:52:27 +0000 (0:00:00.758) 0:10:45.245 ****** 2025-09-19 00:52:39.134138 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.134143 | orchestrator | 2025-09-19 00:52:39.134149 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-09-19 00:52:39.134153 | orchestrator | Friday 19 September 2025 00:52:28 +0000 (0:00:00.501) 0:10:45.746 ****** 2025-09-19 00:52:39.134157 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:52:39.134162 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:52:39.134166 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:52:39.134170 | orchestrator | 2025-09-19 00:52:39.134176 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-09-19 00:52:39.134181 | orchestrator | Friday 19 September 2025 00:52:29 +0000 (0:00:01.571) 0:10:47.318 ****** 2025-09-19 00:52:39.134185 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:52:39.134189 | orchestrator | changed: 
[testbed-node-4] 2025-09-19 00:52:39.134193 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:52:39.134197 | orchestrator | 2025-09-19 00:52:39.134201 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-09-19 00:52:39.134205 | orchestrator | Friday 19 September 2025 00:52:30 +0000 (0:00:01.222) 0:10:48.540 ****** 2025-09-19 00:52:39.134214 | orchestrator | changed: [testbed-node-3] 2025-09-19 00:52:39.134218 | orchestrator | changed: [testbed-node-4] 2025-09-19 00:52:39.134222 | orchestrator | changed: [testbed-node-5] 2025-09-19 00:52:39.134226 | orchestrator | 2025-09-19 00:52:39.134230 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-09-19 00:52:39.134234 | orchestrator | Friday 19 September 2025 00:52:32 +0000 (0:00:01.873) 0:10:50.414 ****** 2025-09-19 00:52:39.134238 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 00:52:39.134242 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-19 00:52:39.134246 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 00:52:39.134251 | orchestrator | 2025-09-19 00:52:39.134255 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-19 00:52:39.134259 | orchestrator | Friday 19 September 2025 00:52:35 +0000 (0:00:02.716) 0:10:53.131 ****** 2025-09-19 00:52:39.134263 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.134267 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.134271 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.134275 | orchestrator | 2025-09-19 00:52:39.134279 | orchestrator | RUNNING HANDLER 
[ceph-handler : Rgws handler] ********************************** 2025-09-19 00:52:39.134283 | orchestrator | Friday 19 September 2025 00:52:35 +0000 (0:00:00.371) 0:10:53.503 ****** 2025-09-19 00:52:39.134287 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:52:39.134291 | orchestrator | 2025-09-19 00:52:39.134295 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-19 00:52:39.134299 | orchestrator | Friday 19 September 2025 00:52:36 +0000 (0:00:00.770) 0:10:54.273 ****** 2025-09-19 00:52:39.134303 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.134307 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.134311 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.134315 | orchestrator | 2025-09-19 00:52:39.134319 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-19 00:52:39.134324 | orchestrator | Friday 19 September 2025 00:52:36 +0000 (0:00:00.339) 0:10:54.613 ****** 2025-09-19 00:52:39.134328 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.134332 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:52:39.134336 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:52:39.134340 | orchestrator | 2025-09-19 00:52:39.134344 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-19 00:52:39.134348 | orchestrator | Friday 19 September 2025 00:52:37 +0000 (0:00:00.325) 0:10:54.938 ****** 2025-09-19 00:52:39.134352 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 00:52:39.134356 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 00:52:39.134360 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 00:52:39.134364 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:52:39.134368 | 
orchestrator | 2025-09-19 00:52:39.134372 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-19 00:52:39.134376 | orchestrator | Friday 19 September 2025 00:52:38 +0000 (0:00:00.833) 0:10:55.771 ****** 2025-09-19 00:52:39.134380 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:52:39.134384 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:52:39.134388 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:52:39.134392 | orchestrator | 2025-09-19 00:52:39.134396 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:52:39.134400 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-09-19 00:52:39.134404 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-09-19 00:52:39.134412 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-09-19 00:52:39.134416 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-09-19 00:52:39.134420 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-09-19 00:52:39.134427 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-09-19 00:52:39.134431 | orchestrator | 2025-09-19 00:52:39.134435 | orchestrator | 2025-09-19 00:52:39.134439 | orchestrator | 2025-09-19 00:52:39.134443 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 00:52:39.134447 | orchestrator | Friday 19 September 2025 00:52:38 +0000 (0:00:00.236) 0:10:56.008 ****** 2025-09-19 00:52:39.134453 | orchestrator | =============================================================================== 2025-09-19 00:52:39.134458 | orchestrator | ceph-osd : Use 
ceph-volume to create osds ------------------------------ 46.99s 2025-09-19 00:52:39.134462 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 40.32s 2025-09-19 00:52:39.134466 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.48s 2025-09-19 00:52:39.134470 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.37s 2025-09-19 00:52:39.134474 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.22s 2025-09-19 00:52:39.134478 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.16s 2025-09-19 00:52:39.134482 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.55s 2025-09-19 00:52:39.134486 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.24s 2025-09-19 00:52:39.134490 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.51s 2025-09-19 00:52:39.134495 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.61s 2025-09-19 00:52:39.134499 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.05s 2025-09-19 00:52:39.134503 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.51s 2025-09-19 00:52:39.134507 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 5.01s 2025-09-19 00:52:39.134511 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.89s 2025-09-19 00:52:39.134515 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.34s 2025-09-19 00:52:39.134519 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.12s 2025-09-19 00:52:39.134523 | orchestrator | ceph-facts : Set_fact 
_monitor_addresses - ipv4 ------------------------- 3.98s 2025-09-19 00:52:39.134527 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.66s 2025-09-19 00:52:39.134531 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.61s 2025-09-19 00:52:39.134535 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.60s 2025-09-19 00:52:39.134539 | orchestrator | 2025-09-19 00:52:39 | INFO  | Task 5f63fc68-4754-4e84-9f2b-2101a09bc8f9 is in state STARTED 2025-09-19 00:52:39.134543 | orchestrator | 2025-09-19 00:52:39 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:52:42.163368 | orchestrator | 2025-09-19 00:52:42 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:52:42.163492 | orchestrator | 2025-09-19 00:52:42 | INFO  | Task c798c2f1-f5f9-4e05-a9d4-0c34483ed745 is in state STARTED 2025-09-19 00:52:42.165120 | orchestrator | 2025-09-19 00:52:42 | INFO  | Task 5f63fc68-4754-4e84-9f2b-2101a09bc8f9 is in state STARTED 2025-09-19 00:52:42.165198 | orchestrator | 2025-09-19 00:52:42 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:52:45.215150 | orchestrator | 2025-09-19 00:52:45 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:52:45.217843 | orchestrator | 2025-09-19 00:52:45 | INFO  | Task c798c2f1-f5f9-4e05-a9d4-0c34483ed745 is in state STARTED 2025-09-19 00:52:45.219724 | orchestrator | 2025-09-19 00:52:45 | INFO  | Task 5f63fc68-4754-4e84-9f2b-2101a09bc8f9 is in state STARTED 2025-09-19 00:52:45.219754 | orchestrator | 2025-09-19 00:52:45 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:52:48.261996 | orchestrator | 2025-09-19 00:52:48 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:52:48.262441 | orchestrator | 2025-09-19 00:52:48 | INFO  | Task c798c2f1-f5f9-4e05-a9d4-0c34483ed745 is in state STARTED 
2025-09-19 00:52:48.263600 | orchestrator | 2025-09-19 00:52:48 | INFO  | Task 5f63fc68-4754-4e84-9f2b-2101a09bc8f9 is in state STARTED 2025-09-19 00:52:48.263629 | orchestrator | 2025-09-19 00:52:48 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:52:51.312220 | orchestrator | 2025-09-19 00:52:51 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:52:51.315330 | orchestrator | 2025-09-19 00:52:51 | INFO  | Task c798c2f1-f5f9-4e05-a9d4-0c34483ed745 is in state STARTED 2025-09-19 00:52:51.317069 | orchestrator | 2025-09-19 00:52:51 | INFO  | Task 5f63fc68-4754-4e84-9f2b-2101a09bc8f9 is in state STARTED 2025-09-19 00:52:51.317109 | orchestrator | 2025-09-19 00:52:51 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:52:54.357056 | orchestrator | 2025-09-19 00:52:54 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:52:54.358591 | orchestrator | 2025-09-19 00:52:54 | INFO  | Task c798c2f1-f5f9-4e05-a9d4-0c34483ed745 is in state STARTED 2025-09-19 00:52:54.361878 | orchestrator | 2025-09-19 00:52:54 | INFO  | Task 5f63fc68-4754-4e84-9f2b-2101a09bc8f9 is in state STARTED 2025-09-19 00:52:54.362090 | orchestrator | 2025-09-19 00:52:54 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:52:57.402273 | orchestrator | 2025-09-19 00:52:57 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:52:57.405757 | orchestrator | 2025-09-19 00:52:57 | INFO  | Task c798c2f1-f5f9-4e05-a9d4-0c34483ed745 is in state STARTED 2025-09-19 00:52:57.407765 | orchestrator | 2025-09-19 00:52:57 | INFO  | Task 5f63fc68-4754-4e84-9f2b-2101a09bc8f9 is in state STARTED 2025-09-19 00:52:57.407983 | orchestrator | 2025-09-19 00:52:57 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:53:00.463834 | orchestrator | 2025-09-19 00:53:00 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:53:00.464661 | orchestrator | 2025-09-19 
00:53:00 | INFO  | Task c798c2f1-f5f9-4e05-a9d4-0c34483ed745 is in state STARTED 2025-09-19 00:53:00.466875 | orchestrator | 2025-09-19 00:53:00 | INFO  | Task 5f63fc68-4754-4e84-9f2b-2101a09bc8f9 is in state STARTED 2025-09-19 00:53:00.466927 | orchestrator | 2025-09-19 00:53:00 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:53:03.503467 | orchestrator | 2025-09-19 00:53:03 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:53:03.506442 | orchestrator | 2025-09-19 00:53:03 | INFO  | Task c798c2f1-f5f9-4e05-a9d4-0c34483ed745 is in state STARTED 2025-09-19 00:53:03.508904 | orchestrator | 2025-09-19 00:53:03 | INFO  | Task 5f63fc68-4754-4e84-9f2b-2101a09bc8f9 is in state STARTED 2025-09-19 00:53:03.509380 | orchestrator | 2025-09-19 00:53:03 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:53:06.562441 | orchestrator | 2025-09-19 00:53:06 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:53:06.565237 | orchestrator | 2025-09-19 00:53:06 | INFO  | Task c798c2f1-f5f9-4e05-a9d4-0c34483ed745 is in state SUCCESS 2025-09-19 00:53:06.567327 | orchestrator | 2025-09-19 00:53:06.567373 | orchestrator | 2025-09-19 00:53:06.567385 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 00:53:06.567396 | orchestrator | 2025-09-19 00:53:06.567405 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 00:53:06.567416 | orchestrator | Friday 19 September 2025 00:50:29 +0000 (0:00:00.310) 0:00:00.310 ****** 2025-09-19 00:53:06.567433 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:53:06.567458 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:53:06.567477 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:53:06.567494 | orchestrator | 2025-09-19 00:53:06.567509 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 
00:53:06.567526 | orchestrator | Friday 19 September 2025 00:50:29 +0000 (0:00:00.286) 0:00:00.597 ****** 2025-09-19 00:53:06.567542 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-09-19 00:53:06.567558 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-09-19 00:53:06.567575 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-09-19 00:53:06.567592 | orchestrator | 2025-09-19 00:53:06.567607 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-09-19 00:53:06.567623 | orchestrator | 2025-09-19 00:53:06.567639 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-19 00:53:06.567656 | orchestrator | Friday 19 September 2025 00:50:29 +0000 (0:00:00.408) 0:00:01.005 ****** 2025-09-19 00:53:06.567674 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:53:06.567691 | orchestrator | 2025-09-19 00:53:06.567707 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-09-19 00:53:06.567724 | orchestrator | Friday 19 September 2025 00:50:30 +0000 (0:00:00.534) 0:00:01.540 ****** 2025-09-19 00:53:06.567740 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-19 00:53:06.567758 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-19 00:53:06.567774 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-19 00:53:06.567791 | orchestrator | 2025-09-19 00:53:06.567806 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-09-19 00:53:06.567817 | orchestrator | Friday 19 September 2025 00:50:32 +0000 (0:00:01.713) 0:00:03.254 ****** 2025-09-19 00:53:06.567872 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 00:53:06.567890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 00:53:06.567939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 00:53:06.567955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 00:53:06.567975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 
'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 00:53:06.567988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 00:53:06.568008 | orchestrator | 2025-09-19 
00:53:06.568019 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-19 00:53:06.568031 | orchestrator | Friday 19 September 2025 00:50:34 +0000 (0:00:02.175) 0:00:05.429 ****** 2025-09-19 00:53:06.568042 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:53:06.568058 | orchestrator | 2025-09-19 00:53:06.568069 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-09-19 00:53:06.568080 | orchestrator | Friday 19 September 2025 00:50:34 +0000 (0:00:00.721) 0:00:06.151 ****** 2025-09-19 00:53:06.568100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 00:53:06.568113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 00:53:06.568132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 00:53:06.568145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 00:53:06.568171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 00:53:06.568183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 00:53:06.568193 | orchestrator | 2025-09-19 00:53:06.568203 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-09-19 00:53:06.568213 | orchestrator | Friday 19 September 2025 00:50:38 +0000 (0:00:03.335) 0:00:09.487 ****** 2025-09-19 00:53:06.568228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 00:53:06.568244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 00:53:06.568255 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:53:06.568271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 00:53:06.568283 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 00:53:06.568293 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:53:06.568307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 
00:53:06.568324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 00:53:06.568334 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:53:06.568344 | orchestrator | 2025-09-19 00:53:06.568353 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-09-19 00:53:06.568363 | orchestrator | Friday 19 September 2025 00:50:39 +0000 (0:00:01.215) 0:00:10.703 ****** 2025-09-19 00:53:06.568380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 00:53:06.568391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 00:53:06.568401 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:53:06.568411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 00:53:06.568432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 00:53:06.568442 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:53:06.568457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 00:53:06.568468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 00:53:06.568479 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:53:06.568488 | orchestrator | 2025-09-19 00:53:06.568498 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-09-19 00:53:06.568508 | orchestrator | Friday 19 September 2025 00:50:40 +0000 (0:00:00.837) 0:00:11.540 ****** 2025-09-19 
00:53:06.568518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 00:53:06.568545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 00:53:06.568556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 00:53:06.568573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 00:53:06.568584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 00:53:06.568607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2025-09-19 00:53:06.568618 | orchestrator | 2025-09-19 00:53:06.568627 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-09-19 00:53:06.568637 | orchestrator | Friday 19 September 2025 00:50:43 +0000 (0:00:02.857) 0:00:14.398 ****** 2025-09-19 00:53:06.568647 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:53:06.568657 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:53:06.568667 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:53:06.568676 | orchestrator | 2025-09-19 00:53:06.568686 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-09-19 00:53:06.568695 | orchestrator | Friday 19 September 2025 00:50:45 +0000 (0:00:02.734) 0:00:17.132 ****** 2025-09-19 00:53:06.568705 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:53:06.568714 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:53:06.568724 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:53:06.568733 | orchestrator | 2025-09-19 00:53:06.568743 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-09-19 00:53:06.568752 | orchestrator | Friday 19 September 2025 00:50:48 +0000 (0:00:02.136) 0:00:19.269 ****** 2025-09-19 00:53:06.568771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 00:53:06.568782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 00:53:06.568945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 00:53:06.568967 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 00:53:06.568987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 00:53:06.568998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 00:53:06.569020 | orchestrator | 2025-09-19 00:53:06.569030 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-19 00:53:06.569040 | orchestrator | Friday 19 September 2025 00:50:49 +0000 (0:00:01.874) 0:00:21.144 ****** 2025-09-19 00:53:06.569049 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:53:06.569059 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:53:06.569075 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:53:06.569091 | orchestrator | 2025-09-19 00:53:06.569115 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-19 00:53:06.569133 | orchestrator | Friday 19 
September 2025 00:50:50 +0000 (0:00:00.280) 0:00:21.424 ******
2025-09-19 00:53:06.569149 | orchestrator |
2025-09-19 00:53:06.569164 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-09-19 00:53:06.569180 | orchestrator | Friday 19 September 2025 00:50:50 +0000 (0:00:00.067) 0:00:21.491 ******
2025-09-19 00:53:06.569195 | orchestrator |
2025-09-19 00:53:06.569209 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-09-19 00:53:06.569224 | orchestrator | Friday 19 September 2025 00:50:50 +0000 (0:00:00.077) 0:00:21.569 ******
2025-09-19 00:53:06.569239 | orchestrator |
2025-09-19 00:53:06.569253 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2025-09-19 00:53:06.569277 | orchestrator | Friday 19 September 2025 00:50:50 +0000 (0:00:00.314) 0:00:21.884 ******
2025-09-19 00:53:06.569294 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:53:06.569310 | orchestrator |
2025-09-19 00:53:06.569326 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2025-09-19 00:53:06.569342 | orchestrator | Friday 19 September 2025 00:50:50 +0000 (0:00:00.204) 0:00:22.089 ******
2025-09-19 00:53:06.569359 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:53:06.569374 | orchestrator |
2025-09-19 00:53:06.569391 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2025-09-19 00:53:06.569409 | orchestrator | Friday 19 September 2025 00:50:51 +0000 (0:00:00.206) 0:00:22.296 ******
2025-09-19 00:53:06.569424 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:53:06.569439 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:53:06.569448 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:53:06.569458 | orchestrator |
2025-09-19 00:53:06.569467 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2025-09-19 00:53:06.569477 | orchestrator | Friday 19 September 2025 00:51:45 +0000 (0:00:54.871) 0:01:17.168 ******
2025-09-19 00:53:06.569487 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:53:06.569496 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:53:06.569506 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:53:06.569515 | orchestrator |
2025-09-19 00:53:06.569525 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-09-19 00:53:06.569534 | orchestrator | Friday 19 September 2025 00:52:52 +0000 (0:01:06.914) 0:02:24.083 ******
2025-09-19 00:53:06.569544 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 00:53:06.569554 | orchestrator |
2025-09-19 00:53:06.569565 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-09-19 00:53:06.569576 | orchestrator | Friday 19 September 2025 00:52:53 +0000 (0:00:00.570) 0:02:24.653 ******
2025-09-19 00:53:06.569588 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:53:06.569599 | orchestrator |
2025-09-19 00:53:06.569619 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-09-19 00:53:06.569631 | orchestrator | Friday 19 September 2025 00:52:56 +0000 (0:00:02.613) 0:02:27.267 ******
2025-09-19 00:53:06.569641 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:53:06.569652 | orchestrator |
2025-09-19 00:53:06.569663 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-09-19 00:53:06.569674 | orchestrator | Friday 19 September 2025 00:52:58 +0000 (0:00:02.342) 0:02:29.609 ******
2025-09-19 00:53:06.569686 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:53:06.569696 | orchestrator |
2025-09-19 00:53:06.569707 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-09-19 00:53:06.569718 | orchestrator | Friday 19 September 2025 00:53:01 +0000 (0:00:02.836) 0:02:32.445 ******
2025-09-19 00:53:06.569729 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:53:06.569741 | orchestrator |
2025-09-19 00:53:06.569761 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 00:53:06.569774 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-19 00:53:06.569787 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-19 00:53:06.569798 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-19 00:53:06.569810 | orchestrator |
2025-09-19 00:53:06.569821 | orchestrator |
2025-09-19 00:53:06.569957 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 00:53:06.569985 | orchestrator | Friday 19 September 2025 00:53:03 +0000 (0:00:02.652) 0:02:35.098 ******
2025-09-19 00:53:06.569995 | orchestrator | ===============================================================================
2025-09-19 00:53:06.570005 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 66.91s
2025-09-19 00:53:06.570014 | orchestrator | opensearch : Restart opensearch container ------------------------------ 54.87s
2025-09-19 00:53:06.570072 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.34s
2025-09-19 00:53:06.570082 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.86s
2025-09-19 00:53:06.570092 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.84s
2025-09-19 00:53:06.570101 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.73s
2025-09-19 00:53:06.570111 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.65s
2025-09-19 00:53:06.570120 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.61s
2025-09-19 00:53:06.570129 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.34s
2025-09-19 00:53:06.570139 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.18s
2025-09-19 00:53:06.570148 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.14s
2025-09-19 00:53:06.570158 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.87s
2025-09-19 00:53:06.570166 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.71s
2025-09-19 00:53:06.570173 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.22s
2025-09-19 00:53:06.570181 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.84s
2025-09-19 00:53:06.570189 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.72s
2025-09-19 00:53:06.570197 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s
2025-09-19 00:53:06.570211 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s
2025-09-19 00:53:06.570219 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.46s
2025-09-19 00:53:06.570235 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s
2025-09-19 00:53:06.570243 | orchestrator | 2025-09-19 00:53:06 | INFO  | Task 5f63fc68-4754-4e84-9f2b-2101a09bc8f9 is in state STARTED
2025-09-19 00:53:06.570251 | orchestrator | 2025-09-19 00:53:06 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:53:09.617705 |
orchestrator | 2025-09-19 00:53:09 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED
2025-09-19 00:53:09.619725 | orchestrator | 2025-09-19 00:53:09 | INFO  | Task 5f63fc68-4754-4e84-9f2b-2101a09bc8f9 is in state STARTED
2025-09-19 00:53:09.619810 | orchestrator | 2025-09-19 00:53:09 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:53:12.673010 | orchestrator | 2025-09-19 00:53:12 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED
2025-09-19 00:53:12.674132 | orchestrator | 2025-09-19 00:53:12 | INFO  | Task 5f63fc68-4754-4e84-9f2b-2101a09bc8f9 is in state STARTED
2025-09-19 00:53:12.674303 | orchestrator | 2025-09-19 00:53:12 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:53:15.718071 | orchestrator | 2025-09-19 00:53:15 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED
2025-09-19 00:53:15.719811 | orchestrator | 2025-09-19 00:53:15 | INFO  | Task 5f63fc68-4754-4e84-9f2b-2101a09bc8f9 is in state STARTED
2025-09-19 00:53:15.719877 | orchestrator | 2025-09-19 00:53:15 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:53:18.762301 | orchestrator | 2025-09-19 00:53:18 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED
2025-09-19 00:53:18.763366 | orchestrator | 2025-09-19 00:53:18 | INFO  | Task 5f63fc68-4754-4e84-9f2b-2101a09bc8f9 is in state STARTED
2025-09-19 00:53:18.763495 | orchestrator | 2025-09-19 00:53:18 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:53:21.807664 | orchestrator | 2025-09-19 00:53:21 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED
2025-09-19 00:53:21.810546 | orchestrator | 2025-09-19 00:53:21 | INFO  | Task 5f63fc68-4754-4e84-9f2b-2101a09bc8f9 is in state STARTED
2025-09-19 00:53:21.810641 | orchestrator | 2025-09-19 00:53:21 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:53:24.856106 | orchestrator | 2025-09-19 00:53:24 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED
2025-09-19 00:53:24.856783 | orchestrator | 2025-09-19 00:53:24 | INFO  | Task 5f63fc68-4754-4e84-9f2b-2101a09bc8f9 is in state STARTED
2025-09-19 00:53:24.856850 | orchestrator | 2025-09-19 00:53:24 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:53:27.906338 | orchestrator | 2025-09-19 00:53:27 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED
2025-09-19 00:53:27.907996 | orchestrator | 2025-09-19 00:53:27 | INFO  | Task 5f63fc68-4754-4e84-9f2b-2101a09bc8f9 is in state STARTED
2025-09-19 00:53:27.908073 | orchestrator | 2025-09-19 00:53:27 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:53:30.950599 | orchestrator | 2025-09-19 00:53:30 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED
2025-09-19 00:53:30.952230 | orchestrator | 2025-09-19 00:53:30 | INFO  | Task 5f63fc68-4754-4e84-9f2b-2101a09bc8f9 is in state STARTED
2025-09-19 00:53:30.952260 | orchestrator | 2025-09-19 00:53:30 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:53:33.994095 | orchestrator | 2025-09-19 00:53:33 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED
2025-09-19 00:53:33.996084 | orchestrator | 2025-09-19 00:53:33 | INFO  | Task 5f63fc68-4754-4e84-9f2b-2101a09bc8f9 is in state STARTED
2025-09-19 00:53:33.996155 | orchestrator | 2025-09-19 00:53:33 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:53:37.030132 | orchestrator | 2025-09-19 00:53:37 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED
2025-09-19 00:53:37.035060 | orchestrator | 2025-09-19 00:53:37 | INFO  | Task 5f63fc68-4754-4e84-9f2b-2101a09bc8f9 is in state STARTED
2025-09-19 00:53:37.035140 | orchestrator | 2025-09-19 00:53:37 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:53:40.082846 | orchestrator | 2025-09-19 00:53:40 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19
00:53:40.086234 | orchestrator | 2025-09-19 00:53:40 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED
2025-09-19 00:53:40.087129 | orchestrator | 2025-09-19 00:53:40 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED
2025-09-19 00:53:40.093369 | orchestrator |
2025-09-19 00:53:40.093421 | orchestrator |
2025-09-19 00:53:40.093442 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2025-09-19 00:53:40.093462 | orchestrator |
2025-09-19 00:53:40.093482 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-09-19 00:53:40.093503 | orchestrator | Friday 19 September 2025 00:50:28 +0000 (0:00:00.099) 0:00:00.099 ******
2025-09-19 00:53:40.093524 | orchestrator | ok: [localhost] => {
2025-09-19 00:53:40.093545 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2025-09-19 00:53:40.093567 | orchestrator | }
2025-09-19 00:53:40.093588 | orchestrator |
2025-09-19 00:53:40.093609 | orchestrator | TASK [Check MariaDB service] ***************************************************
2025-09-19 00:53:40.093630 | orchestrator | Friday 19 September 2025 00:50:28 +0000 (0:00:00.050) 0:00:00.150 ******
2025-09-19 00:53:40.093650 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2025-09-19 00:53:40.093672 | orchestrator | ...ignoring
2025-09-19 00:53:40.093694 | orchestrator |
2025-09-19 00:53:40.093715 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2025-09-19 00:53:40.093735 | orchestrator | Friday 19 September 2025 00:50:31 +0000 (0:00:02.883) 0:00:03.034 ******
2025-09-19 00:53:40.093754 | orchestrator | skipping: [localhost]
2025-09-19 00:53:40.093766 | orchestrator |
2025-09-19 00:53:40.093777 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2025-09-19 00:53:40.093788 | orchestrator | Friday 19 September 2025 00:50:31 +0000 (0:00:00.046) 0:00:03.080 ******
2025-09-19 00:53:40.093847 | orchestrator | ok: [localhost]
2025-09-19 00:53:40.093859 | orchestrator |
2025-09-19 00:53:40.093870 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 00:53:40.093881 | orchestrator |
2025-09-19 00:53:40.093892 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 00:53:40.093903 | orchestrator | Friday 19 September 2025 00:50:31 +0000 (0:00:00.136) 0:00:03.217 ******
2025-09-19 00:53:40.093914 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:53:40.093926 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:53:40.093937 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:53:40.093948 | orchestrator |
2025-09-19 00:53:40.093962 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 00:53:40.093974 | orchestrator | Friday 19 September 2025 00:50:32 +0000 (0:00:00.348) 0:00:03.565 ******
2025-09-19 00:53:40.093987 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-09-19 00:53:40.094000 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
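The ignored "Check MariaDB service" failure above is a wait-for-banner probe: it connects to 192.168.16.9:3306 and waits for the string "MariaDB" to appear in the server greeting (a MySQL/MariaDB server embeds its version string in its initial handshake packet). On a fresh testbed the timeout is expected, so the play prints the reassurance beforehand and keeps the deploy action instead of switching to upgrade. A minimal Python sketch of the same probe (`banner_contains` is a hypothetical helper for illustration, not the actual Ansible module code):

```python
import socket
import time

def banner_contains(host: str, port: int, needle: bytes, timeout: float = 2.0) -> bool:
    """Connect to host:port and report whether the bytes the server sends
    first contain `needle` before `timeout` expires. A refused or
    unreachable connection counts as "service not ready"."""
    deadline = time.monotonic() + timeout
    data = b""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(0.5)  # poll in short slices until the deadline
            while time.monotonic() < deadline:
                try:
                    chunk = sock.recv(4096)
                except socket.timeout:
                    continue  # nothing yet; keep waiting until the deadline
                if not chunk:
                    break  # server closed the connection without the banner
                data += chunk
                if needle in data:
                    return True
    except OSError:
        pass  # connection refused / host unreachable => not ready
    return False
```

On a node where MariaDB is already up this returns True within a couple of seconds, which is the condition under which the play would set kolla_action_mariadb to upgrade rather than falling back to kolla_action_ng.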
2025-09-19 00:53:40.094013 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-19 00:53:40.094074 | orchestrator | 2025-09-19 00:53:40.094087 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-19 00:53:40.094099 | orchestrator | 2025-09-19 00:53:40.094113 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-19 00:53:40.094154 | orchestrator | Friday 19 September 2025 00:50:33 +0000 (0:00:01.123) 0:00:04.689 ****** 2025-09-19 00:53:40.094167 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-19 00:53:40.094180 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-19 00:53:40.094193 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-19 00:53:40.094205 | orchestrator | 2025-09-19 00:53:40.094218 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-19 00:53:40.094230 | orchestrator | Friday 19 September 2025 00:50:33 +0000 (0:00:00.369) 0:00:05.058 ****** 2025-09-19 00:53:40.094242 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:53:40.094256 | orchestrator | 2025-09-19 00:53:40.094268 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-09-19 00:53:40.094281 | orchestrator | Friday 19 September 2025 00:50:34 +0000 (0:00:00.603) 0:00:05.662 ****** 2025-09-19 00:53:40.094745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 00:53:40.094783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 00:53:40.094856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 00:53:40.094877 | orchestrator | 2025-09-19 00:53:40.094909 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-09-19 00:53:40.094927 | orchestrator | Friday 19 September 2025 00:50:37 +0000 (0:00:03.539) 0:00:09.201 ****** 2025-09-19 00:53:40.094946 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:53:40.094964 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:53:40.094982 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:53:40.095002 | orchestrator | 2025-09-19 00:53:40.095020 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-09-19 00:53:40.095090 | orchestrator | Friday 19 September 2025 00:50:38 +0000 (0:00:00.672) 0:00:09.874 ****** 2025-09-19 00:53:40.095105 | orchestrator | 
skipping: [testbed-node-1] 2025-09-19 00:53:40.095115 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:53:40.095126 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:53:40.095137 | orchestrator | 2025-09-19 00:53:40.095147 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-09-19 00:53:40.095158 | orchestrator | Friday 19 September 2025 00:50:40 +0000 (0:00:01.841) 0:00:11.715 ****** 2025-09-19 00:53:40.095171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 00:53:40.095224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 00:53:40.095243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 00:53:40.095262 | orchestrator | 2025-09-19 00:53:40.095275 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-09-19 00:53:40.095294 | orchestrator | Friday 19 September 2025 00:50:44 +0000 (0:00:03.714) 0:00:15.430 ****** 2025-09-19 00:53:40.095312 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:53:40.095332 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:53:40.095352 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:53:40.095373 | orchestrator | 2025-09-19 00:53:40.095388 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-09-19 00:53:40.095402 | orchestrator | Friday 19 September 2025 00:50:45 +0000 (0:00:01.257) 0:00:16.687 ****** 2025-09-19 00:53:40.095415 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:53:40.095427 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:53:40.095439 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:53:40.095451 | orchestrator | 2025-09-19 00:53:40.095464 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-19 00:53:40.095477 | orchestrator | Friday 19 September 2025 00:50:49 +0000 (0:00:04.017) 0:00:20.704 ****** 2025-09-19 00:53:40.095490 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:53:40.095508 | orchestrator | 2025-09-19 00:53:40.095521 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-19 00:53:40.095534 | orchestrator | Friday 19 September 2025 00:50:50 +0000 (0:00:00.567) 0:00:21.271 ****** 2025-09-19 00:53:40.095581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 00:53:40.095611 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:53:40.095623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 00:53:40.095639 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:53:40.095679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 00:53:40.095713 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:53:40.095733 | orchestrator | 2025-09-19 00:53:40.095753 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-19 00:53:40.095773 | orchestrator | Friday 19 September 2025 
00:50:52 +0000 (0:00:02.512) 0:00:23.784 ****** 2025-09-19 00:53:40.095819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 00:53:40.095841 | orchestrator | skipping: [testbed-node-0] 2025-09-19 
00:53:40.095873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 00:53:40.095901 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:53:40.095913 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 00:53:40.095925 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:53:40.095935 | orchestrator | 2025-09-19 00:53:40.095946 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over backend internal TLS key] ***** 2025-09-19 00:53:40.095957 | orchestrator | Friday 19 September 2025 00:50:55 +0000 (0:00:02.841) 0:00:26.625 ****** 2025-09-19 00:53:40.096045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}}}})  2025-09-19 00:53:40.096087 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:53:40.096104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 00:53:40.096121 
| orchestrator | skipping: [testbed-node-2] 2025-09-19 00:53:40.096139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 00:53:40.096158 | orchestrator | skipping: [testbed-node-1] 2025-09-19 
00:53:40.096186 | orchestrator | 2025-09-19 00:53:40.096205 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-09-19 00:53:40.096230 | orchestrator | Friday 19 September 2025 00:50:58 +0000 (0:00:02.866) 0:00:29.491 ****** 2025-09-19 00:53:40.096261 | orchestrator | 2025-09-19 00:53:40 | INFO  | Task 5f63fc68-4754-4e84-9f2b-2101a09bc8f9 is in state SUCCESS 2025-09-19 00:53:40.096281 | orchestrator | 2025-09-19 00:53:40 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:53:40.096301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 00:53:40.096322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 00:53:40.096379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 00:53:40.096394 | orchestrator | 2025-09-19 00:53:40.096405 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-09-19 00:53:40.096416 | orchestrator | Friday 19 September 2025 00:51:01 +0000 (0:00:03.253) 0:00:32.745 ****** 2025-09-19 00:53:40.096431 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:53:40.096446 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:53:40.096457 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:53:40.096468 | orchestrator | 2025-09-19 00:53:40.096479 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-09-19 00:53:40.096490 | orchestrator | Friday 19 September 2025 00:51:02 +0000 (0:00:01.119) 0:00:33.864 ****** 2025-09-19 00:53:40.096503 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:53:40.096522 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:53:40.096541 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:53:40.096559 | orchestrator | 2025-09-19 00:53:40.096577 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-09-19 00:53:40.096597 | orchestrator | Friday 19 September 2025 00:51:02 +0000 (0:00:00.321) 0:00:34.186 ****** 2025-09-19 00:53:40.096615 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:53:40.096630 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:53:40.096649 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:53:40.096668 | orchestrator | 2025-09-19 00:53:40.096687 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-09-19 00:53:40.096700 | orchestrator | Friday 19 September 2025 00:51:03 +0000 (0:00:00.379) 0:00:34.566 ****** 2025-09-19 00:53:40.096712 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-09-19 00:53:40.096724 | orchestrator | ...ignoring 2025-09-19 00:53:40.096736 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-09-19 00:53:40.096747 | orchestrator | ...ignoring 2025-09-19 00:53:40.096771 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-09-19 00:53:40.096782 | orchestrator | ...ignoring 2025-09-19 00:53:40.096850 | orchestrator | 2025-09-19 00:53:40.096864 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-09-19 00:53:40.096875 | orchestrator | Friday 19 September 2025 00:51:14 +0000 (0:00:11.065) 0:00:45.632 ****** 2025-09-19 00:53:40.096886 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:53:40.096897 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:53:40.096908 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:53:40.096918 | orchestrator | 2025-09-19 00:53:40.096929 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-09-19 00:53:40.096940 | orchestrator | Friday 19 September 2025 00:51:15 +0000 (0:00:00.856) 0:00:46.488 ****** 2025-09-19 00:53:40.096951 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:53:40.096961 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:53:40.096972 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:53:40.096983 | orchestrator | 2025-09-19 00:53:40.096994 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-09-19 00:53:40.097005 | orchestrator | Friday 19 September 2025 00:51:15 +0000 (0:00:00.456) 0:00:46.945 ****** 2025-09-19 00:53:40.097015 | orchestrator | skipping: 
[testbed-node-0] 2025-09-19 00:53:40.097026 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:53:40.097037 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:53:40.097047 | orchestrator | 2025-09-19 00:53:40.097058 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-09-19 00:53:40.097076 | orchestrator | Friday 19 September 2025 00:51:16 +0000 (0:00:00.423) 0:00:47.368 ****** 2025-09-19 00:53:40.097095 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:53:40.097107 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:53:40.097117 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:53:40.097128 | orchestrator | 2025-09-19 00:53:40.097139 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-09-19 00:53:40.097150 | orchestrator | Friday 19 September 2025 00:51:16 +0000 (0:00:00.440) 0:00:47.808 ****** 2025-09-19 00:53:40.097160 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:53:40.097171 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:53:40.097185 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:53:40.097204 | orchestrator | 2025-09-19 00:53:40.097222 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-09-19 00:53:40.097240 | orchestrator | Friday 19 September 2025 00:51:17 +0000 (0:00:00.637) 0:00:48.446 ****** 2025-09-19 00:53:40.097258 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:53:40.097276 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:53:40.097292 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:53:40.097307 | orchestrator | 2025-09-19 00:53:40.097323 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-19 00:53:40.097340 | orchestrator | Friday 19 September 2025 00:51:17 +0000 (0:00:00.436) 0:00:48.882 ****** 2025-09-19 00:53:40.097357 | orchestrator | skipping: 
[testbed-node-1] 2025-09-19 00:53:40.097374 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:53:40.097390 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-09-19 00:53:40.097407 | orchestrator | 2025-09-19 00:53:40.097423 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-09-19 00:53:40.097440 | orchestrator | Friday 19 September 2025 00:51:17 +0000 (0:00:00.369) 0:00:49.252 ****** 2025-09-19 00:53:40.097456 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:53:40.097472 | orchestrator | 2025-09-19 00:53:40.097490 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-09-19 00:53:40.097508 | orchestrator | Friday 19 September 2025 00:51:28 +0000 (0:00:10.097) 0:00:59.349 ****** 2025-09-19 00:53:40.097527 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:53:40.097544 | orchestrator | 2025-09-19 00:53:40.097564 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-19 00:53:40.097590 | orchestrator | Friday 19 September 2025 00:51:28 +0000 (0:00:00.124) 0:00:59.473 ****** 2025-09-19 00:53:40.097601 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:53:40.097612 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:53:40.097622 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:53:40.097633 | orchestrator | 2025-09-19 00:53:40.097644 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-09-19 00:53:40.097655 | orchestrator | Friday 19 September 2025 00:51:29 +0000 (0:00:01.043) 0:01:00.517 ****** 2025-09-19 00:53:40.097665 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:53:40.097676 | orchestrator | 2025-09-19 00:53:40.097687 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-09-19 00:53:40.097697 | orchestrator | Friday 19 
September 2025 00:51:37 +0000 (0:00:08.164) 0:01:08.682 ****** 2025-09-19 00:53:40.097708 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:53:40.097719 | orchestrator | 2025-09-19 00:53:40.097730 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-09-19 00:53:40.097741 | orchestrator | Friday 19 September 2025 00:51:39 +0000 (0:00:01.674) 0:01:10.357 ****** 2025-09-19 00:53:40.097751 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:53:40.097762 | orchestrator | 2025-09-19 00:53:40.097773 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-09-19 00:53:40.097783 | orchestrator | Friday 19 September 2025 00:51:41 +0000 (0:00:02.577) 0:01:12.934 ****** 2025-09-19 00:53:40.097825 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:53:40.097842 | orchestrator | 2025-09-19 00:53:40.097853 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-09-19 00:53:40.097864 | orchestrator | Friday 19 September 2025 00:51:41 +0000 (0:00:00.129) 0:01:13.063 ****** 2025-09-19 00:53:40.097875 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:53:40.097886 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:53:40.097896 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:53:40.097907 | orchestrator | 2025-09-19 00:53:40.097917 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-09-19 00:53:40.097928 | orchestrator | Friday 19 September 2025 00:51:42 +0000 (0:00:00.521) 0:01:13.584 ****** 2025-09-19 00:53:40.097939 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:53:40.097950 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-19 00:53:40.097961 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:53:40.097972 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:53:40.097982 | orchestrator | 
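The bootstrap sequence above gates each step on Galera state: the handlers "Wait for first MariaDB service port liveness" and "Wait for first MariaDB service to sync WSREP" must pass on the bootstrap host before the remaining nodes are started. A minimal sketch of the sync decision, assuming the actual check queries the server's `wsrep_*` status variables (the helper names here are hypothetical, not kolla-ansible's):

```python
import time

def is_wsrep_synced(status: dict) -> bool:
    """True when a Galera node reports it is ready and in the Synced state.

    `status` is assumed to hold the output of
    SHOW STATUS LIKE 'wsrep_%' as a name -> value mapping."""
    return (
        status.get("wsrep_ready") == "ON"
        and status.get("wsrep_local_state_comment") == "Synced"
    )

def wait_for_sync(poll, retries: int = 30, delay: float = 0.0) -> bool:
    """Poll a status-fetching callable until the node syncs or retries run out."""
    for _ in range(retries):
        if is_wsrep_synced(poll()):
            return True
        time.sleep(delay)
    return False
```

A donor node (`wsrep_local_state_comment` of `Donor/Desynced`) is deliberately not treated as synced, even though the container healthcheck above tolerates it via `AVAILABLE_WHEN_DONOR=1`.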
2025-09-19 00:53:40.097993 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-19 00:53:40.098004 | orchestrator | skipping: no hosts matched 2025-09-19 00:53:40.098079 | orchestrator | 2025-09-19 00:53:40.098095 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-19 00:53:40.098106 | orchestrator | 2025-09-19 00:53:40.098117 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-19 00:53:40.098127 | orchestrator | Friday 19 September 2025 00:51:42 +0000 (0:00:00.360) 0:01:13.945 ****** 2025-09-19 00:53:40.098138 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:53:40.098149 | orchestrator | 2025-09-19 00:53:40.098160 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-19 00:53:40.098171 | orchestrator | Friday 19 September 2025 00:52:01 +0000 (0:00:18.699) 0:01:32.645 ****** 2025-09-19 00:53:40.098181 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:53:40.098192 | orchestrator | 2025-09-19 00:53:40.098203 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-19 00:53:40.098214 | orchestrator | Friday 19 September 2025 00:52:21 +0000 (0:00:20.569) 0:01:53.214 ****** 2025-09-19 00:53:40.098224 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:53:40.098235 | orchestrator | 2025-09-19 00:53:40.098246 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-19 00:53:40.098265 | orchestrator | 2025-09-19 00:53:40.098276 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-19 00:53:40.098306 | orchestrator | Friday 19 September 2025 00:52:24 +0000 (0:00:02.483) 0:01:55.698 ****** 2025-09-19 00:53:40.098318 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:53:40.098329 | orchestrator | 
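The earlier ignored failures ("Timeout when waiting for search string MariaDB in 192.168.16.10:3306") and the "Wait for MariaDB service port liveness" tasks here both probe the TCP greeting: the MySQL protocol handshake embeds the server version string, so "MariaDB" shows up in the first packet once the server is accepting connections. A sketch of such a probe (assumed behavior, not the actual module code):

```python
import socket

def banner_contains_mariadb(data: bytes) -> bool:
    """The MySQL wire-protocol greeting carries the server version
    (e.g. b'10.11.13-MariaDB...'), so its presence signals liveness."""
    return b"MariaDB" in data

def probe(host: str, port: int = 3306, timeout: float = 10.0) -> bool:
    """Connect and inspect the greeting; returns False on refusal or
    timeout rather than raising, mirroring the task's '...ignoring' path."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            return banner_contains_mariadb(s.recv(128))
    except OSError:
        return False
```

Before bootstrap nothing is listening, so the probe fails on all three nodes; that is the expected signal that a fresh cluster must be bootstrapped, which is why the failures are ignored.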
2025-09-19 00:53:40.098339 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-09-19 00:53:40.098350 | orchestrator | Friday 19 September 2025 00:52:48 +0000 (0:00:24.450) 0:02:20.148 ******
2025-09-19 00:53:40.098361 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:53:40.098372 | orchestrator |
2025-09-19 00:53:40.098383 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-09-19 00:53:40.098393 | orchestrator | Friday 19 September 2025 00:53:04 +0000 (0:00:15.606) 0:02:35.755 ******
2025-09-19 00:53:40.098404 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:53:40.098415 | orchestrator |
2025-09-19 00:53:40.098425 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-09-19 00:53:40.098436 | orchestrator |
2025-09-19 00:53:40.098447 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-09-19 00:53:40.098457 | orchestrator | Friday 19 September 2025 00:53:07 +0000 (0:00:02.744) 0:02:38.499 ******
2025-09-19 00:53:40.098468 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:53:40.098479 | orchestrator |
2025-09-19 00:53:40.098490 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-09-19 00:53:40.098500 | orchestrator | Friday 19 September 2025 00:53:23 +0000 (0:00:15.857) 0:02:54.356 ******
2025-09-19 00:53:40.098511 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:53:40.098522 | orchestrator |
2025-09-19 00:53:40.098532 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-09-19 00:53:40.098543 | orchestrator | Friday 19 September 2025 00:53:23 +0000 (0:00:00.588) 0:02:54.945 ******
2025-09-19 00:53:40.098555 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:53:40.098573 | orchestrator |
2025-09-19 00:53:40.098592 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-09-19 00:53:40.098611 | orchestrator |
2025-09-19 00:53:40.098629 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-09-19 00:53:40.098647 | orchestrator | Friday 19 September 2025 00:53:26 +0000 (0:00:02.377) 0:02:57.322 ******
2025-09-19 00:53:40.098665 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 00:53:40.098683 | orchestrator |
2025-09-19 00:53:40.098702 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2025-09-19 00:53:40.098720 | orchestrator | Friday 19 September 2025 00:53:26 +0000 (0:00:00.556) 0:02:57.878 ******
2025-09-19 00:53:40.098738 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:53:40.098756 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:53:40.098776 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:53:40.098821 | orchestrator |
2025-09-19 00:53:40.098843 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2025-09-19 00:53:40.098861 | orchestrator | Friday 19 September 2025 00:53:29 +0000 (0:00:02.426) 0:03:00.305 ******
2025-09-19 00:53:40.098880 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:53:40.098891 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:53:40.098902 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:53:40.098912 | orchestrator |
2025-09-19 00:53:40.098923 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2025-09-19 00:53:40.098934 | orchestrator | Friday 19 September 2025 00:53:31 +0000 (0:00:02.065) 0:03:02.371 ******
2025-09-19 00:53:40.098945 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:53:40.098955 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:53:40.098966 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:53:40.098976 | orchestrator |
2025-09-19 00:53:40.098987 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2025-09-19 00:53:40.098998 | orchestrator | Friday 19 September 2025 00:53:33 +0000 (0:00:02.187) 0:03:04.559 ******
2025-09-19 00:53:40.099019 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:53:40.099030 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:53:40.099040 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:53:40.099051 | orchestrator |
2025-09-19 00:53:40.099061 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2025-09-19 00:53:40.099072 | orchestrator | Friday 19 September 2025 00:53:35 +0000 (0:00:02.190) 0:03:06.749 ******
2025-09-19 00:53:40.099083 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:53:40.099094 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:53:40.099104 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:53:40.099115 | orchestrator |
2025-09-19 00:53:40.099125 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-09-19 00:53:40.099136 | orchestrator | Friday 19 September 2025 00:53:38 +0000 (0:00:02.648) 0:03:09.398 ******
2025-09-19 00:53:40.099147 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:53:40.099157 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:53:40.099168 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:53:40.099178 | orchestrator |
2025-09-19 00:53:40.099189 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 00:53:40.099201 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-09-19 00:53:40.099213 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2025-09-19 00:53:40.099225 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
 2025-09-19 00:53:40.099235 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-19 00:53:40.099246 | orchestrator | 2025-09-19 00:53:40.099257 | orchestrator | 2025-09-19 00:53:40.099267 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 00:53:40.099278 | orchestrator | Friday 19 September 2025 00:53:38 +0000 (0:00:00.222) 0:03:09.621 ****** 2025-09-19 00:53:40.099306 | orchestrator | =============================================================================== 2025-09-19 00:53:40.099319 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 43.15s 2025-09-19 00:53:40.099330 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.18s 2025-09-19 00:53:40.099340 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 15.86s 2025-09-19 00:53:40.099351 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.07s 2025-09-19 00:53:40.099361 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.10s 2025-09-19 00:53:40.099372 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.16s 2025-09-19 00:53:40.099383 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.23s 2025-09-19 00:53:40.099393 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.02s 2025-09-19 00:53:40.099404 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.71s 2025-09-19 00:53:40.099414 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.54s 2025-09-19 00:53:40.099425 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.25s 2025-09-19 00:53:40.099435 | orchestrator | 
Check MariaDB service --------------------------------------------------- 2.88s 2025-09-19 00:53:40.099446 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.87s 2025-09-19 00:53:40.099457 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.84s 2025-09-19 00:53:40.099467 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.65s 2025-09-19 00:53:40.099478 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.58s 2025-09-19 00:53:40.099495 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.51s 2025-09-19 00:53:40.099512 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.43s 2025-09-19 00:53:40.099532 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.38s 2025-09-19 00:53:40.099558 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.19s 2025-09-19 00:53:43.135930 | orchestrator | 2025-09-19 00:53:43 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:53:43.136559 | orchestrator | 2025-09-19 00:53:43 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:53:43.138782 | orchestrator | 2025-09-19 00:53:43 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED 2025-09-19 00:53:43.138845 | orchestrator | 2025-09-19 00:53:43 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:53:46.176404 | orchestrator | 2025-09-19 00:53:46 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:53:46.178557 | orchestrator | 2025-09-19 00:53:46 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:53:46.180362 | orchestrator | 2025-09-19 00:53:46 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state 
STARTED 2025-09-19 00:53:46.180465 | orchestrator | 2025-09-19 00:53:46 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:53:49.221438 | orchestrator | 2025-09-19 00:53:49 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:53:49.225992 | orchestrator | 2025-09-19 00:53:49 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:53:49.226639 | orchestrator | 2025-09-19 00:53:49 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED 2025-09-19 00:53:49.226679 | orchestrator | 2025-09-19 00:53:49 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:53:52.267571 | orchestrator | 2025-09-19 00:53:52 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:53:52.267846 | orchestrator | 2025-09-19 00:53:52 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:53:52.269032 | orchestrator | 2025-09-19 00:53:52 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED 2025-09-19 00:53:52.269061 | orchestrator | 2025-09-19 00:53:52 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:53:55.301030 | orchestrator | 2025-09-19 00:53:55 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:53:55.302257 | orchestrator | 2025-09-19 00:53:55 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:53:55.303180 | orchestrator | 2025-09-19 00:53:55 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED 2025-09-19 00:53:55.303273 | orchestrator | 2025-09-19 00:53:55 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:53:58.341195 | orchestrator | 2025-09-19 00:53:58 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:53:58.342067 | orchestrator | 2025-09-19 00:53:58 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:53:58.343828 | orchestrator | 
2025-09-19 00:53:58 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED 2025-09-19 00:53:58.343868 | orchestrator | 2025-09-19 00:53:58 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:54:01.377590 | orchestrator | 2025-09-19 00:54:01 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:54:01.378649 | orchestrator | 2025-09-19 00:54:01 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:54:01.379995 | orchestrator | 2025-09-19 00:54:01 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED 2025-09-19 00:54:01.380020 | orchestrator | 2025-09-19 00:54:01 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:54:04.414832 | orchestrator | 2025-09-19 00:54:04 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:54:04.415447 | orchestrator | 2025-09-19 00:54:04 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:54:04.417195 | orchestrator | 2025-09-19 00:54:04 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED 2025-09-19 00:54:04.417468 | orchestrator | 2025-09-19 00:54:04 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:54:07.464105 | orchestrator | 2025-09-19 00:54:07 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:54:07.464205 | orchestrator | 2025-09-19 00:54:07 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:54:07.464220 | orchestrator | 2025-09-19 00:54:07 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED 2025-09-19 00:54:07.464232 | orchestrator | 2025-09-19 00:54:07 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:54:10.509974 | orchestrator | 2025-09-19 00:54:10 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:54:10.511687 | orchestrator | 2025-09-19 00:54:10 | INFO  | Task 
bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:54:10.513223 | orchestrator | 2025-09-19 00:54:10 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED 2025-09-19 00:54:10.513871 | orchestrator | 2025-09-19 00:54:10 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:54:13.547443 | orchestrator | 2025-09-19 00:54:13 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:54:13.547562 | orchestrator | 2025-09-19 00:54:13 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:54:13.548447 | orchestrator | 2025-09-19 00:54:13 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED 2025-09-19 00:54:13.549290 | orchestrator | 2025-09-19 00:54:13 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:54:16.597178 | orchestrator | 2025-09-19 00:54:16 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:54:16.598572 | orchestrator | 2025-09-19 00:54:16 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:54:16.600367 | orchestrator | 2025-09-19 00:54:16 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED 2025-09-19 00:54:16.600387 | orchestrator | 2025-09-19 00:54:16 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:54:19.639919 | orchestrator | 2025-09-19 00:54:19 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:54:19.641311 | orchestrator | 2025-09-19 00:54:19 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:54:19.643324 | orchestrator | 2025-09-19 00:54:19 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED 2025-09-19 00:54:19.643351 | orchestrator | 2025-09-19 00:54:19 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:54:22.681859 | orchestrator | 2025-09-19 00:54:22 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state 
STARTED 2025-09-19 00:54:22.683035 | orchestrator | 2025-09-19 00:54:22 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:54:22.685298 | orchestrator | 2025-09-19 00:54:22 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED 2025-09-19 00:54:22.685328 | orchestrator | 2025-09-19 00:54:22 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:54:25.734597 | orchestrator | 2025-09-19 00:54:25 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:54:25.736431 | orchestrator | 2025-09-19 00:54:25 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:54:25.739144 | orchestrator | 2025-09-19 00:54:25 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED 2025-09-19 00:54:25.739175 | orchestrator | 2025-09-19 00:54:25 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:54:28.780901 | orchestrator | 2025-09-19 00:54:28 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:54:28.781599 | orchestrator | 2025-09-19 00:54:28 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:54:28.782456 | orchestrator | 2025-09-19 00:54:28 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED 2025-09-19 00:54:28.782485 | orchestrator | 2025-09-19 00:54:28 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:54:31.820631 | orchestrator | 2025-09-19 00:54:31 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:54:31.821957 | orchestrator | 2025-09-19 00:54:31 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:54:31.822592 | orchestrator | 2025-09-19 00:54:31 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED 2025-09-19 00:54:31.822621 | orchestrator | 2025-09-19 00:54:31 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:54:34.860553 | orchestrator | 
2025-09-19 00:54:34 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:54:34.860652 | orchestrator | 2025-09-19 00:54:34 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:54:34.861421 | orchestrator | 2025-09-19 00:54:34 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED 2025-09-19 00:54:34.861445 | orchestrator | 2025-09-19 00:54:34 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:54:37.908963 | orchestrator | 2025-09-19 00:54:37 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:54:37.911561 | orchestrator | 2025-09-19 00:54:37 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:54:37.914356 | orchestrator | 2025-09-19 00:54:37 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED 2025-09-19 00:54:37.914401 | orchestrator | 2025-09-19 00:54:37 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:54:40.961533 | orchestrator | 2025-09-19 00:54:40 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:54:40.963899 | orchestrator | 2025-09-19 00:54:40 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:54:40.966949 | orchestrator | 2025-09-19 00:54:40 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED 2025-09-19 00:54:40.967183 | orchestrator | 2025-09-19 00:54:40 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:54:44.019379 | orchestrator | 2025-09-19 00:54:44 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:54:44.020650 | orchestrator | 2025-09-19 00:54:44 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:54:44.022559 | orchestrator | 2025-09-19 00:54:44 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED 2025-09-19 00:54:44.022703 | orchestrator | 2025-09-19 00:54:44 | INFO  | 
Wait 1 second(s) until the next check 2025-09-19 00:54:47.072228 | orchestrator | 2025-09-19 00:54:47 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:54:47.074859 | orchestrator | 2025-09-19 00:54:47 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:54:47.077608 | orchestrator | 2025-09-19 00:54:47 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED 2025-09-19 00:54:47.077646 | orchestrator | 2025-09-19 00:54:47 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:54:50.121183 | orchestrator | 2025-09-19 00:54:50 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state STARTED 2025-09-19 00:54:50.123359 | orchestrator | 2025-09-19 00:54:50 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:54:50.124773 | orchestrator | 2025-09-19 00:54:50 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED 2025-09-19 00:54:50.124819 | orchestrator | 2025-09-19 00:54:50 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:54:53.173818 | orchestrator | 2025-09-19 00:54:53 | INFO  | Task e356d281-d255-487a-ab9e-773751a924b4 is in state SUCCESS 2025-09-19 00:54:53.176546 | orchestrator | 2025-09-19 00:54:53.177653 | orchestrator | 2025-09-19 00:54:53.177704 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-09-19 00:54:53.177757 | orchestrator | 2025-09-19 00:54:53.177769 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-19 00:54:53.177779 | orchestrator | Friday 19 September 2025 00:52:43 +0000 (0:00:00.600) 0:00:00.600 ****** 2025-09-19 00:54:53.177790 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:54:53.177801 | orchestrator | 2025-09-19 00:54:53.177810 | orchestrator | TASK [ceph-facts : Check if it is atomic host] 
********************************* 2025-09-19 00:54:53.177820 | orchestrator | Friday 19 September 2025 00:52:43 +0000 (0:00:00.633) 0:00:01.234 ****** 2025-09-19 00:54:53.177830 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:54:53.177841 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:54:53.177895 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:54:53.177929 | orchestrator | 2025-09-19 00:54:53.177939 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-19 00:54:53.177949 | orchestrator | Friday 19 September 2025 00:52:44 +0000 (0:00:00.654) 0:00:01.889 ****** 2025-09-19 00:54:53.177959 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:54:53.177969 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:54:53.177978 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:54:53.177988 | orchestrator | 2025-09-19 00:54:53.177998 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-19 00:54:53.178008 | orchestrator | Friday 19 September 2025 00:52:44 +0000 (0:00:00.273) 0:00:02.162 ****** 2025-09-19 00:54:53.178067 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:54:53.178078 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:54:53.178088 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:54:53.178097 | orchestrator | 2025-09-19 00:54:53.178107 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-19 00:54:53.178117 | orchestrator | Friday 19 September 2025 00:52:45 +0000 (0:00:00.799) 0:00:02.962 ****** 2025-09-19 00:54:53.178135 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:54:53.178152 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:54:53.178167 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:54:53.178183 | orchestrator | 2025-09-19 00:54:53.178199 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-19 00:54:53.178219 | orchestrator 
| Friday 19 September 2025 00:52:45 +0000 (0:00:00.325) 0:00:03.287 ****** 2025-09-19 00:54:53.178238 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:54:53.178275 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:54:53.178287 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:54:53.178298 | orchestrator | 2025-09-19 00:54:53.178309 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-19 00:54:53.178320 | orchestrator | Friday 19 September 2025 00:52:46 +0000 (0:00:00.297) 0:00:03.585 ****** 2025-09-19 00:54:53.178331 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:54:53.178341 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:54:53.178352 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:54:53.178363 | orchestrator | 2025-09-19 00:54:53.178375 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-19 00:54:53.178386 | orchestrator | Friday 19 September 2025 00:52:46 +0000 (0:00:00.305) 0:00:03.891 ****** 2025-09-19 00:54:53.178397 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:54:53.178409 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:54:53.178420 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:54:53.178430 | orchestrator | 2025-09-19 00:54:53.178441 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-19 00:54:53.178452 | orchestrator | Friday 19 September 2025 00:52:47 +0000 (0:00:00.473) 0:00:04.365 ****** 2025-09-19 00:54:53.178463 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:54:53.178473 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:54:53.178484 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:54:53.178496 | orchestrator | 2025-09-19 00:54:53.178506 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-19 00:54:53.178517 | orchestrator | Friday 19 September 2025 00:52:47 +0000 
(0:00:00.323) 0:00:04.688 ****** 2025-09-19 00:54:53.178528 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-19 00:54:53.178539 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 00:54:53.178550 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 00:54:53.178560 | orchestrator | 2025-09-19 00:54:53.178571 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-19 00:54:53.178582 | orchestrator | Friday 19 September 2025 00:52:47 +0000 (0:00:00.616) 0:00:05.304 ****** 2025-09-19 00:54:53.178592 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:54:53.178603 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:54:53.178613 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:54:53.178622 | orchestrator | 2025-09-19 00:54:53.178632 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-19 00:54:53.178641 | orchestrator | Friday 19 September 2025 00:52:48 +0000 (0:00:00.431) 0:00:05.736 ****** 2025-09-19 00:54:53.178651 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-19 00:54:53.178660 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 00:54:53.178669 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 00:54:53.178679 | orchestrator | 2025-09-19 00:54:53.178688 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-19 00:54:53.178698 | orchestrator | Friday 19 September 2025 00:52:50 +0000 (0:00:02.352) 0:00:08.088 ****** 2025-09-19 00:54:53.178707 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-19 00:54:53.178746 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-1)  2025-09-19 00:54:53.178756 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-19 00:54:53.178765 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:54:53.178775 | orchestrator | 2025-09-19 00:54:53.178785 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-19 00:54:53.178859 | orchestrator | Friday 19 September 2025 00:52:51 +0000 (0:00:00.400) 0:00:08.489 ****** 2025-09-19 00:54:53.178874 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-19 00:54:53.178896 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-19 00:54:53.178906 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-19 00:54:53.178916 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:54:53.178926 | orchestrator | 2025-09-19 00:54:53.178935 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-19 00:54:53.178945 | orchestrator | Friday 19 September 2025 00:52:51 +0000 (0:00:00.794) 0:00:09.284 ****** 2025-09-19 00:54:53.178957 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-19 00:54:53.178969 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-19 00:54:53.178979 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-19 00:54:53.178989 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:54:53.178999 | orchestrator | 2025-09-19 00:54:53.179009 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-19 00:54:53.179018 | orchestrator | Friday 19 September 2025 00:52:52 +0000 (0:00:00.159) 0:00:09.444 ****** 2025-09-19 00:54:53.179031 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '1701691ebc8f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-19 00:52:49.060676', 'end': '2025-09-19 00:52:49.108839', 'delta': '0:00:00.048163', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 
'removes': None, 'stdin': None}}, 'stdout_lines': ['1701691ebc8f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-09-19 00:54:53.179044 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '63a48d2327f0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-19 00:52:49.938246', 'end': '2025-09-19 00:52:49.975445', 'delta': '0:00:00.037199', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['63a48d2327f0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-09-19 00:54:53.179097 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '1ced7f02289d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-19 00:52:50.569994', 'end': '2025-09-19 00:52:50.604790', 'delta': '0:00:00.034796', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1ced7f02289d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-09-19 00:54:53.179110 | orchestrator | 2025-09-19 00:54:53.179120 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-19 00:54:53.179137 | orchestrator | Friday 19 
September 2025 00:52:52 +0000 (0:00:00.324) 0:00:09.769 ****** 2025-09-19 00:54:53.179154 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:54:53.179171 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:54:53.179187 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:54:53.179205 | orchestrator | 2025-09-19 00:54:53.179222 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-19 00:54:53.179236 | orchestrator | Friday 19 September 2025 00:52:52 +0000 (0:00:00.381) 0:00:10.150 ****** 2025-09-19 00:54:53.179245 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-09-19 00:54:53.179255 | orchestrator | 2025-09-19 00:54:53.179264 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-19 00:54:53.179274 | orchestrator | Friday 19 September 2025 00:52:54 +0000 (0:00:01.635) 0:00:11.786 ****** 2025-09-19 00:54:53.179284 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:54:53.179293 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:54:53.179303 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:54:53.179312 | orchestrator | 2025-09-19 00:54:53.179322 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-19 00:54:53.179332 | orchestrator | Friday 19 September 2025 00:52:54 +0000 (0:00:00.288) 0:00:12.075 ****** 2025-09-19 00:54:53.179341 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:54:53.179351 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:54:53.179360 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:54:53.179370 | orchestrator | 2025-09-19 00:54:53.179379 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-19 00:54:53.179389 | orchestrator | Friday 19 September 2025 00:52:55 +0000 (0:00:00.371) 0:00:12.446 ****** 2025-09-19 00:54:53.179398 | orchestrator | skipping: [testbed-node-3] 
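The `Set_fact running_mon - container` task above loops `docker ps -q --filter name=ceph-mon-<host>` over the mon hosts and keeps the first host whose filter returned a container ID, which the following `_container_exec_cmd` fact then targets. A minimal Python sketch of that selection logic over the loop results logged above (`pick_running_mon` and the trimmed result dicts are illustrative, not ceph-ansible's actual variables or Jinja):

```python
# Sketch: derive a "running mon" from command-loop results like those logged above.
# Each result carries the mon host in 'item' and the `docker ps -q` output in 'stdout';
# a non-empty stdout means the filter matched a running ceph-mon container.
def pick_running_mon(results):
    """Return the first mon host whose name filter matched a running container."""
    for r in results:
        if r.get("rc") == 0 and r.get("stdout"):
            return r["item"]
    return None

# Trimmed versions of the three loop results shown in the log.
results = [
    {"rc": 0, "stdout": "1701691ebc8f", "item": "testbed-node-0"},
    {"rc": 0, "stdout": "63a48d2327f0", "item": "testbed-node-1"},
    {"rc": 0, "stdout": "1ced7f02289d", "item": "testbed-node-2"},
]

print(pick_running_mon(results))  # -> testbed-node-0
```

Since `testbed-node-0` is the first host with a matching container, later tasks such as `Get current fsid if cluster is already running` can delegate to an already-running mon, which is why the log shows that task delegated to another node.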
2025-09-19 00:54:53.179408 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:54:53.179418 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:54:53.179427 | orchestrator | 2025-09-19 00:54:53.179437 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-19 00:54:53.179446 | orchestrator | Friday 19 September 2025 00:52:55 +0000 (0:00:00.390) 0:00:12.837 ****** 2025-09-19 00:54:53.179456 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:54:53.179465 | orchestrator | 2025-09-19 00:54:53.179475 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-19 00:54:53.179485 | orchestrator | Friday 19 September 2025 00:52:55 +0000 (0:00:00.125) 0:00:12.962 ****** 2025-09-19 00:54:53.179494 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:54:53.179504 | orchestrator | 2025-09-19 00:54:53.179513 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-19 00:54:53.179523 | orchestrator | Friday 19 September 2025 00:52:55 +0000 (0:00:00.199) 0:00:13.162 ****** 2025-09-19 00:54:53.179532 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:54:53.179542 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:54:53.179572 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:54:53.179581 | orchestrator | 2025-09-19 00:54:53.179591 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-19 00:54:53.179600 | orchestrator | Friday 19 September 2025 00:52:56 +0000 (0:00:00.246) 0:00:13.408 ****** 2025-09-19 00:54:53.179610 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:54:53.179620 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:54:53.179629 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:54:53.179638 | orchestrator | 2025-09-19 00:54:53.179648 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved 
symlinks] ************** 2025-09-19 00:54:53.179658 | orchestrator | Friday 19 September 2025 00:52:56 +0000 (0:00:00.263) 0:00:13.672 ****** 2025-09-19 00:54:53.179667 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:54:53.179677 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:54:53.179686 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:54:53.179695 | orchestrator | 2025-09-19 00:54:53.179705 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-19 00:54:53.179735 | orchestrator | Friday 19 September 2025 00:52:56 +0000 (0:00:00.390) 0:00:14.063 ****** 2025-09-19 00:54:53.179746 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:54:53.179755 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:54:53.179765 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:54:53.179774 | orchestrator | 2025-09-19 00:54:53.179784 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-19 00:54:53.179793 | orchestrator | Friday 19 September 2025 00:52:56 +0000 (0:00:00.280) 0:00:14.344 ****** 2025-09-19 00:54:53.179803 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:54:53.179813 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:54:53.179822 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:54:53.179832 | orchestrator | 2025-09-19 00:54:53.179841 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-19 00:54:53.179851 | orchestrator | Friday 19 September 2025 00:52:57 +0000 (0:00:00.268) 0:00:14.612 ****** 2025-09-19 00:54:53.179860 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:54:53.179870 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:54:53.179879 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:54:53.179889 | orchestrator | 2025-09-19 00:54:53.179899 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices 
from resolved symlinks] *** 2025-09-19 00:54:53.179948 | orchestrator | Friday 19 September 2025 00:52:57 +0000 (0:00:00.280) 0:00:14.893 ****** 2025-09-19 00:54:53.179960 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:54:53.179970 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:54:53.179979 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:54:53.179988 | orchestrator | 2025-09-19 00:54:53.179998 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-19 00:54:53.180007 | orchestrator | Friday 19 September 2025 00:52:57 +0000 (0:00:00.393) 0:00:15.287 ****** 2025-09-19 00:54:53.180018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bc7aa585--dea2--57c4--a9fa--18818632dc3c-osd--block--bc7aa585--dea2--57c4--a9fa--18818632dc3c', 'dm-uuid-LVM-peC7EuXhUExYM0OH9W5LUB0gTfq5Mn8XZy9S1dInyYzQKePf1K4F5F6btSROVcVd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 00:54:53.180030 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ba978b90--a663--5d0c--8f05--4b4e8986f79e-osd--block--ba978b90--a663--5d0c--8f05--4b4e8986f79e', 'dm-uuid-LVM-0kq9LsH3khMJXJBPflnAmhtw6k1LWcFzdDuhao44bI7HhFDExFCqRk8a5Qivdga7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 00:54:53.180047 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:54:53.180057 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:54:53.180067 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:54:53.180077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:54:53.180087 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:54:53.180125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:54:53.180152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:54:53.180171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:54:53.180192 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55973005-cab9-4651-a089-f76828fe5b13', 'scsi-SQEMU_QEMU_HARDDISK_55973005-cab9-4651-a089-f76828fe5b13'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55973005-cab9-4651-a089-f76828fe5b13-part1', 'scsi-SQEMU_QEMU_HARDDISK_55973005-cab9-4651-a089-f76828fe5b13-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55973005-cab9-4651-a089-f76828fe5b13-part14', 'scsi-SQEMU_QEMU_HARDDISK_55973005-cab9-4651-a089-f76828fe5b13-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55973005-cab9-4651-a089-f76828fe5b13-part15', 'scsi-SQEMU_QEMU_HARDDISK_55973005-cab9-4651-a089-f76828fe5b13-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55973005-cab9-4651-a089-f76828fe5b13-part16', 'scsi-SQEMU_QEMU_HARDDISK_55973005-cab9-4651-a089-f76828fe5b13-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 00:54:53.180224 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bc7aa585--dea2--57c4--a9fa--18818632dc3c-osd--block--bc7aa585--dea2--57c4--a9fa--18818632dc3c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PKNIfe-zvQ5-lQVM-MW32-ccVT-C3aW-1GkH9A', 'scsi-0QEMU_QEMU_HARDDISK_5095dff0-407e-4b8b-811f-a3c5cd55a16d', 'scsi-SQEMU_QEMU_HARDDISK_5095dff0-407e-4b8b-811f-a3c5cd55a16d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 00:54:53.180287 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7c9f8b51--166c--5055--bfcb--65abe80d3110-osd--block--7c9f8b51--166c--5055--bfcb--65abe80d3110', 'dm-uuid-LVM-QN79jZEdFpP77x7qseaJoi73CZZdfAzmIlGGe0MpjgLncX42KretcJTX8BTrz4ED'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 00:54:53.180300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ba978b90--a663--5d0c--8f05--4b4e8986f79e-osd--block--ba978b90--a663--5d0c--8f05--4b4e8986f79e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ENgUQe-8clb-uPlh-t6js-QVpE-6mC2-oty0V6', 'scsi-0QEMU_QEMU_HARDDISK_7d2555f8-8f26-4f5e-8b79-cd121c4d405f', 'scsi-SQEMU_QEMU_HARDDISK_7d2555f8-8f26-4f5e-8b79-cd121c4d405f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 00:54:53.180317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--25e4de26--ffd2--5ba5--a3e7--287c918a347b-osd--block--25e4de26--ffd2--5ba5--a3e7--287c918a347b', 'dm-uuid-LVM-KZZmEP1zkNZJvI2exmJffXX1NUziEioMheeu9yKxf1jgKqdEs9cMHQIipJtMU6aq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 00:54:53.180327 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ace41295-549a-4643-92eb-07daa5f39402', 'scsi-SQEMU_QEMU_HARDDISK_ace41295-549a-4643-92eb-07daa5f39402'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 00:54:53.180338 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:54:53.180349 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-00-02-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 00:54:53.180359 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
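The `Collect existed devices` loop above iterates over `ansible_facts['devices']`-style entries (`dm-*`, `loop*`, `sda`..., `sr0`) on each OSD host; it is skipped here because the conditional did not apply, but the dumped items show what such a scan sees. A hedged sketch of filtering a facts dict of that shape down to blank candidate disks — skipping device-mapper targets, loop devices, removable media, and disks that already carry partitions or LVM holders (the function name and the exact skip rules are illustrative, not the role's actual conditionals):

```python
def candidate_disks(devices):
    """Filter an ansible_facts['devices']-style dict down to blank, usable disks."""
    out = []
    for name, info in devices.items():
        if name.startswith(("dm-", "loop")):  # skip mapper targets and loopbacks
            continue
        if info.get("removable") == "1":      # skip CD-ROM / removable media (sr0)
            continue
        if info.get("partitions"):            # already partitioned, e.g. the root disk
            continue
        if info.get("holders"):               # already claimed, e.g. by an LVM-backed OSD
            continue
        out.append(name)
    return sorted(out)

# Trimmed entries mirroring the facts logged above for testbed-node-3.
devices = {
    "dm-0":  {"holders": [], "removable": "0", "partitions": {}},
    "loop0": {"holders": [], "removable": "0", "partitions": {}},
    "sda":   {"holders": [], "removable": "0", "partitions": {"sda1": {}}},
    "sdb":   {"holders": ["ceph--bc7aa585-osd--block"], "removable": "0", "partitions": {}},
    "sdd":   {"holders": [], "removable": "0", "partitions": {}},
    "sr0":   {"holders": [], "removable": "1", "partitions": {}},
}

print(candidate_disks(devices))  # -> ['sdd']
```

Under these assumed rules only `sdd` survives: `sda` is the partitioned root disk, `sdb`/`sdc` are held by existing ceph OSD logical volumes (the `dm-0`/`dm-1` masters in the log), and `sr0` is the removable config-drive.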
 2025-09-19 00:54:53.180397 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:54:53.180414 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:54:53.180424 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:54:53.180435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:54:53.180456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:54:53.180467 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:54:53.180477 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:54:53.180499 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3adbf97e-ee72-4483-9697-646cf4299ea9', 'scsi-SQEMU_QEMU_HARDDISK_3adbf97e-ee72-4483-9697-646cf4299ea9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3adbf97e-ee72-4483-9697-646cf4299ea9-part1', 'scsi-SQEMU_QEMU_HARDDISK_3adbf97e-ee72-4483-9697-646cf4299ea9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3adbf97e-ee72-4483-9697-646cf4299ea9-part14', 'scsi-SQEMU_QEMU_HARDDISK_3adbf97e-ee72-4483-9697-646cf4299ea9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_3adbf97e-ee72-4483-9697-646cf4299ea9-part15', 'scsi-SQEMU_QEMU_HARDDISK_3adbf97e-ee72-4483-9697-646cf4299ea9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3adbf97e-ee72-4483-9697-646cf4299ea9-part16', 'scsi-SQEMU_QEMU_HARDDISK_3adbf97e-ee72-4483-9697-646cf4299ea9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 00:54:53.180511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7c9f8b51--166c--5055--bfcb--65abe80d3110-osd--block--7c9f8b51--166c--5055--bfcb--65abe80d3110'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Wu4HNx-Ix3l-9Lrf-RNoI-j8Qb-7eYo-keRwP1', 'scsi-0QEMU_QEMU_HARDDISK_94fdce60-5769-46af-b883-c01ec9bbc4f3', 'scsi-SQEMU_QEMU_HARDDISK_94fdce60-5769-46af-b883-c01ec9bbc4f3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 00:54:53.180528 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--25e4de26--ffd2--5ba5--a3e7--287c918a347b-osd--block--25e4de26--ffd2--5ba5--a3e7--287c918a347b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LH0ZRy-8fTh-qjKT-TbcL-BpOd-D3RO-A7MJtR', 'scsi-0QEMU_QEMU_HARDDISK_7d861b66-423b-4a73-89d0-4a2393a19521', 'scsi-SQEMU_QEMU_HARDDISK_7d861b66-423b-4a73-89d0-4a2393a19521'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 00:54:53.180538 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b274d452-dc05-477a-a838-600cb81e7cbe', 'scsi-SQEMU_QEMU_HARDDISK_b274d452-dc05-477a-a838-600cb81e7cbe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 00:54:53.180549 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-00-01-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 00:54:53.180559 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:54:53.180568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9c5ae36c--b075--5e22--9b23--69e08de6e546-osd--block--9c5ae36c--b075--5e22--9b23--69e08de6e546', 'dm-uuid-LVM-lfAlIdHrcDtGyKUEF5i0CQ7AW9WuYdFAvIs32dguFQnfBxTP0vlKeXjJ6EmldXOP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 00:54:53.180591 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--3271a5cd--b931--506b--9a72--a7bc6b6b65fd-osd--block--3271a5cd--b931--506b--9a72--a7bc6b6b65fd', 'dm-uuid-LVM-2H1nJgTXIAlKzWZYKQGW3oGBiSW0fcaFILONhV774LItMWxXgUUO6WPV1hxOidff'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 00:54:53.180602 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:54:53.180618 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:54:53.180628 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 00:54:53.180638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 00:54:53.180815 | orchestrator | skipping: [testbed-node-5]
2025-09-19 00:54:53.180825 | orchestrator |
2025-09-19 00:54:53.180835 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2025-09-19 00:54:53.180844 | orchestrator | Friday 19 September 2025 00:52:58 +0000 (0:00:00.506) 0:00:15.794 ******
2025-09-19 00:54:53.181156 | orchestrator | skipping: [testbed-node-3]
2025-09-19 00:54:53.181497 | orchestrator | skipping: [testbed-node-4]
2025-09-19 00:54:53.181550 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3', 'scsi-SQEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3-part1', 'scsi-SQEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3-part14', 'scsi-SQEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids':
['scsi-0QEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3-part15', 'scsi-SQEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3-part16', 'scsi-SQEMU_QEMU_HARDDISK_60cbc511-46f7-41b8-8fa9-930abf7265d3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:54:53.181567 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9c5ae36c--b075--5e22--9b23--69e08de6e546-osd--block--9c5ae36c--b075--5e22--9b23--69e08de6e546'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JV9It9-RrIQ-nRF5-y62U-tOHg-Lev3-DHJjFv', 'scsi-0QEMU_QEMU_HARDDISK_5c96df58-7556-4413-84d6-ffa963b8d5b4', 'scsi-SQEMU_QEMU_HARDDISK_5c96df58-7556-4413-84d6-ffa963b8d5b4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:54:53.181578 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--3271a5cd--b931--506b--9a72--a7bc6b6b65fd-osd--block--3271a5cd--b931--506b--9a72--a7bc6b6b65fd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-aLKzS5-GI8w-bf2n-GZAt-rqsY-9oL4-1Oti50', 'scsi-0QEMU_QEMU_HARDDISK_037340a3-0b4d-471e-9cf4-4052731628bd', 'scsi-SQEMU_QEMU_HARDDISK_037340a3-0b4d-471e-9cf4-4052731628bd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:54:53.181594 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_253dac68-3781-42b7-8d02-e83cc46bb576', 'scsi-SQEMU_QEMU_HARDDISK_253dac68-3781-42b7-8d02-e83cc46bb576'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:54:53.181616 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-00-02-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 00:54:53.181627 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:54:53.181637 | orchestrator | 2025-09-19 00:54:53.181646 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-19 00:54:53.181656 | orchestrator | Friday 19 September 2025 00:52:58 +0000 (0:00:00.499) 0:00:16.293 ****** 2025-09-19 00:54:53.181666 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:54:53.181676 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:54:53.181685 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:54:53.181695 | orchestrator | 2025-09-19 00:54:53.181705 | orchestrator | TASK [ceph-facts : Set default 
osd_pool_default_crush_rule fact] *************** 2025-09-19 00:54:53.181741 | orchestrator | Friday 19 September 2025 00:52:59 +0000 (0:00:00.698) 0:00:16.991 ****** 2025-09-19 00:54:53.181752 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:54:53.181761 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:54:53.181771 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:54:53.181780 | orchestrator | 2025-09-19 00:54:53.181790 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-19 00:54:53.181799 | orchestrator | Friday 19 September 2025 00:53:00 +0000 (0:00:00.454) 0:00:17.446 ****** 2025-09-19 00:54:53.181809 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:54:53.181818 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:54:53.181828 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:54:53.181837 | orchestrator | 2025-09-19 00:54:53.181847 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-19 00:54:53.181856 | orchestrator | Friday 19 September 2025 00:53:01 +0000 (0:00:01.629) 0:00:19.075 ****** 2025-09-19 00:54:53.181866 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:54:53.181876 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:54:53.181885 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:54:53.181895 | orchestrator | 2025-09-19 00:54:53.181904 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-19 00:54:53.181914 | orchestrator | Friday 19 September 2025 00:53:02 +0000 (0:00:00.287) 0:00:19.363 ****** 2025-09-19 00:54:53.181924 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:54:53.181933 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:54:53.181942 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:54:53.181952 | orchestrator | 2025-09-19 00:54:53.181961 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] 
*********************** 2025-09-19 00:54:53.181979 | orchestrator | Friday 19 September 2025 00:53:02 +0000 (0:00:00.406) 0:00:19.770 ****** 2025-09-19 00:54:53.181989 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:54:53.181998 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:54:53.182008 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:54:53.182047 | orchestrator | 2025-09-19 00:54:53.182059 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-19 00:54:53.182068 | orchestrator | Friday 19 September 2025 00:53:02 +0000 (0:00:00.492) 0:00:20.262 ****** 2025-09-19 00:54:53.182078 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-19 00:54:53.182088 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-19 00:54:53.182097 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-19 00:54:53.182107 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-19 00:54:53.182116 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-19 00:54:53.182132 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-19 00:54:53.182148 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-19 00:54:53.182164 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-19 00:54:53.182179 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-19 00:54:53.182196 | orchestrator | 2025-09-19 00:54:53.182213 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-19 00:54:53.182230 | orchestrator | Friday 19 September 2025 00:53:03 +0000 (0:00:00.846) 0:00:21.109 ****** 2025-09-19 00:54:53.182247 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-19 00:54:53.182263 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-19 00:54:53.182276 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2025-09-19 00:54:53.182285 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:54:53.182295 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-19 00:54:53.182304 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-19 00:54:53.182314 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-19 00:54:53.182323 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:54:53.182333 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-19 00:54:53.182342 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-19 00:54:53.182351 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-19 00:54:53.182361 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:54:53.182370 | orchestrator | 2025-09-19 00:54:53.182380 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-19 00:54:53.182389 | orchestrator | Friday 19 September 2025 00:53:04 +0000 (0:00:00.361) 0:00:21.470 ****** 2025-09-19 00:54:53.182400 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 00:54:53.182409 | orchestrator | 2025-09-19 00:54:53.182419 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-19 00:54:53.182430 | orchestrator | Friday 19 September 2025 00:53:04 +0000 (0:00:00.814) 0:00:22.285 ****** 2025-09-19 00:54:53.182439 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:54:53.182449 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:54:53.182458 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:54:53.182468 | orchestrator | 2025-09-19 00:54:53.182491 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block 
ipv4] **** 2025-09-19 00:54:53.182501 | orchestrator | Friday 19 September 2025 00:53:05 +0000 (0:00:00.326) 0:00:22.611 ****** 2025-09-19 00:54:53.182511 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:54:53.182521 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:54:53.182530 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:54:53.182539 | orchestrator | 2025-09-19 00:54:53.182549 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-19 00:54:53.182566 | orchestrator | Friday 19 September 2025 00:53:05 +0000 (0:00:00.313) 0:00:22.925 ****** 2025-09-19 00:54:53.182575 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:54:53.182585 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:54:53.182594 | orchestrator | skipping: [testbed-node-5] 2025-09-19 00:54:53.182603 | orchestrator | 2025-09-19 00:54:53.182613 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-19 00:54:53.182622 | orchestrator | Friday 19 September 2025 00:53:05 +0000 (0:00:00.336) 0:00:23.262 ****** 2025-09-19 00:54:53.182632 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:54:53.182641 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:54:53.182651 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:54:53.182660 | orchestrator | 2025-09-19 00:54:53.182670 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-19 00:54:53.182679 | orchestrator | Friday 19 September 2025 00:53:06 +0000 (0:00:00.600) 0:00:23.863 ****** 2025-09-19 00:54:53.182689 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 00:54:53.182698 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 00:54:53.182707 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 00:54:53.182772 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:54:53.182782 | 
orchestrator | 2025-09-19 00:54:53.182792 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-19 00:54:53.182801 | orchestrator | Friday 19 September 2025 00:53:06 +0000 (0:00:00.380) 0:00:24.244 ****** 2025-09-19 00:54:53.182811 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 00:54:53.182820 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 00:54:53.182829 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 00:54:53.182839 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:54:53.182849 | orchestrator | 2025-09-19 00:54:53.182858 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-19 00:54:53.182868 | orchestrator | Friday 19 September 2025 00:53:07 +0000 (0:00:00.364) 0:00:24.608 ****** 2025-09-19 00:54:53.182877 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 00:54:53.182887 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 00:54:53.182896 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 00:54:53.182905 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:54:53.182915 | orchestrator | 2025-09-19 00:54:53.182924 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-19 00:54:53.182933 | orchestrator | Friday 19 September 2025 00:53:07 +0000 (0:00:00.362) 0:00:24.970 ****** 2025-09-19 00:54:53.182943 | orchestrator | ok: [testbed-node-3] 2025-09-19 00:54:53.182952 | orchestrator | ok: [testbed-node-4] 2025-09-19 00:54:53.182962 | orchestrator | ok: [testbed-node-5] 2025-09-19 00:54:53.182971 | orchestrator | 2025-09-19 00:54:53.182980 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-19 00:54:53.182990 | orchestrator | Friday 19 September 2025 00:53:07 
+0000 (0:00:00.307) 0:00:25.277 ****** 2025-09-19 00:54:53.182999 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-19 00:54:53.183009 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-19 00:54:53.183018 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-19 00:54:53.183027 | orchestrator | 2025-09-19 00:54:53.183037 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-19 00:54:53.183046 | orchestrator | Friday 19 September 2025 00:53:08 +0000 (0:00:00.481) 0:00:25.759 ****** 2025-09-19 00:54:53.183056 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-19 00:54:53.183065 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 00:54:53.183075 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 00:54:53.183094 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-19 00:54:53.183104 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-19 00:54:53.183114 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-19 00:54:53.183127 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-19 00:54:53.183145 | orchestrator | 2025-09-19 00:54:53.183162 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-19 00:54:53.183179 | orchestrator | Friday 19 September 2025 00:53:09 +0000 (0:00:00.967) 0:00:26.727 ****** 2025-09-19 00:54:53.183197 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-19 00:54:53.183214 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 00:54:53.183227 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 00:54:53.183236 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-19 00:54:53.183246 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-19 00:54:53.183255 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-19 00:54:53.183265 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-19 00:54:53.183274 | orchestrator | 2025-09-19 00:54:53.183298 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-09-19 00:54:53.183308 | orchestrator | Friday 19 September 2025 00:53:11 +0000 (0:00:01.921) 0:00:28.648 ****** 2025-09-19 00:54:53.183317 | orchestrator | skipping: [testbed-node-3] 2025-09-19 00:54:53.183324 | orchestrator | skipping: [testbed-node-4] 2025-09-19 00:54:53.183332 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-09-19 00:54:53.183340 | orchestrator | 2025-09-19 00:54:53.183348 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-09-19 00:54:53.183355 | orchestrator | Friday 19 September 2025 00:53:11 +0000 (0:00:00.392) 0:00:29.041 ****** 2025-09-19 00:54:53.183364 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-19 00:54:53.183373 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2025-09-19 00:54:53.183381 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-19 00:54:53.183389 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-19 00:54:53.183397 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-19 00:54:53.183405 | orchestrator | 2025-09-19 00:54:53.183413 | orchestrator | TASK [generate keys] *********************************************************** 2025-09-19 00:54:53.183421 | orchestrator | Friday 19 September 2025 00:53:56 +0000 (0:00:45.180) 0:01:14.222 ****** 2025-09-19 00:54:53.183435 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:54:53.183442 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:54:53.183450 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:54:53.183458 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:54:53.183465 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:54:53.183473 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 
00:54:53.183481 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-09-19 00:54:53.183488 | orchestrator | 2025-09-19 00:54:53.183496 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-09-19 00:54:53.183504 | orchestrator | Friday 19 September 2025 00:54:21 +0000 (0:00:24.753) 0:01:38.975 ****** 2025-09-19 00:54:53.183512 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:54:53.183520 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:54:53.183527 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:54:53.183535 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:54:53.183543 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:54:53.183550 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:54:53.183559 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 00:54:53.183566 | orchestrator | 2025-09-19 00:54:53.183574 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-09-19 00:54:53.183582 | orchestrator | Friday 19 September 2025 00:54:34 +0000 (0:00:12.517) 0:01:51.493 ****** 2025-09-19 00:54:53.183589 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:54:53.183597 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-19 00:54:53.183605 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 00:54:53.183613 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:54:53.183621 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2025-09-19 00:54:53.183629 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 00:54:53.183645 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:54:53.183654 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-19 00:54:53.183661 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 00:54:53.183672 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:54:53.183686 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-19 00:54:53.183699 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 00:54:53.183766 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:54:53.183782 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-19 00:54:53.183793 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 00:54:53.183804 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 00:54:53.183816 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-19 00:54:53.183827 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 00:54:53.183848 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-09-19 00:54:53.183861 | orchestrator | 2025-09-19 00:54:53.183873 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:54:53.183884 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-09-19 00:54:53.183898 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-19 00:54:53.183912 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-09-19 00:54:53.183926 | orchestrator |
2025-09-19 00:54:53.183940 | orchestrator |
2025-09-19 00:54:53.183953 | orchestrator |
2025-09-19 00:54:53.183967 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 00:54:53.183976 | orchestrator | Friday 19 September 2025 00:54:52 +0000 (0:00:18.571) 0:02:10.064 ******
2025-09-19 00:54:53.183984 | orchestrator | ===============================================================================
2025-09-19 00:54:53.183992 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.18s
2025-09-19 00:54:53.184000 | orchestrator | generate keys ---------------------------------------------------------- 24.75s
2025-09-19 00:54:53.184008 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.57s
2025-09-19 00:54:53.184015 | orchestrator | get keys from monitors ------------------------------------------------- 12.52s
2025-09-19 00:54:53.184023 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.35s
2025-09-19 00:54:53.184031 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.92s
2025-09-19 00:54:53.184038 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.64s
2025-09-19 00:54:53.184046 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 1.63s
2025-09-19 00:54:53.184054 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.97s
2025-09-19 00:54:53.184062 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.85s
2025-09-19 00:54:53.184070 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.81s
2025-09-19 00:54:53.184077 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.80s
2025-09-19 00:54:53.184085 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.79s
2025-09-19 00:54:53.184093 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.70s
2025-09-19 00:54:53.184101 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.65s
2025-09-19 00:54:53.184108 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.63s
2025-09-19 00:54:53.184116 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.62s
2025-09-19 00:54:53.184128 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.60s
2025-09-19 00:54:53.184141 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.51s
2025-09-19 00:54:53.184154 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.50s
2025-09-19 00:54:53.184167 | orchestrator | 2025-09-19 00:54:53 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED
2025-09-19 00:54:53.184182 | orchestrator | 2025-09-19 00:54:53 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED
2025-09-19 00:54:53.184195 | orchestrator | 2025-09-19 00:54:53 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:54:56.234822 | orchestrator | 2025-09-19 00:54:56 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED
2025-09-19 00:54:56.236360 | orchestrator | 2025-09-19 00:54:56 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED
2025-09-19 00:54:56.238969 | orchestrator | 2025-09-19 00:54:56 | INFO  | Task 474336ce-1f3d-4c97-821d-e2daf8656011 is in state STARTED
2025-09-19 00:54:56.239131 | orchestrator | 2025-09-19 00:54:56 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:54:59.291205 | orchestrator | 2025-09-19 00:54:59 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED
2025-09-19 00:54:59.293227 | orchestrator | 2025-09-19 00:54:59 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED
2025-09-19 00:54:59.294866 | orchestrator | 2025-09-19 00:54:59 | INFO  | Task 474336ce-1f3d-4c97-821d-e2daf8656011 is in state STARTED
2025-09-19 00:54:59.294913 | orchestrator | 2025-09-19 00:54:59 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:55:02.336884 | orchestrator | 2025-09-19 00:55:02 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED
2025-09-19 00:55:02.339474 | orchestrator | 2025-09-19 00:55:02 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED
2025-09-19 00:55:02.341482 | orchestrator | 2025-09-19 00:55:02 | INFO  | Task 474336ce-1f3d-4c97-821d-e2daf8656011 is in state STARTED
2025-09-19 00:55:02.342069 | orchestrator | 2025-09-19 00:55:02 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:55:05.389165 | orchestrator | 2025-09-19 00:55:05 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED
2025-09-19 00:55:05.390500 | orchestrator | 2025-09-19 00:55:05 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED
2025-09-19 00:55:05.391780 | orchestrator | 2025-09-19 00:55:05 | INFO  | Task 474336ce-1f3d-4c97-821d-e2daf8656011 is in state STARTED
2025-09-19 00:55:05.391818 | orchestrator | 2025-09-19 00:55:05 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:55:08.442846 | orchestrator | 2025-09-19 00:55:08 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED
2025-09-19 00:55:08.443724 | orchestrator | 2025-09-19 00:55:08 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED
2025-09-19 00:55:08.445715 | orchestrator | 2025-09-19 00:55:08 | INFO  | Task 474336ce-1f3d-4c97-821d-e2daf8656011 is in state STARTED
2025-09-19 00:55:08.445924 | orchestrator | 2025-09-19 00:55:08 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:55:11.486161 | orchestrator | 2025-09-19 00:55:11 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED
2025-09-19 00:55:11.487924 | orchestrator | 2025-09-19 00:55:11 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED
2025-09-19 00:55:11.490771 | orchestrator | 2025-09-19 00:55:11 | INFO  | Task 474336ce-1f3d-4c97-821d-e2daf8656011 is in state STARTED
2025-09-19 00:55:11.490803 | orchestrator | 2025-09-19 00:55:11 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:55:14.528608 | orchestrator | 2025-09-19 00:55:14 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED
2025-09-19 00:55:14.529822 | orchestrator | 2025-09-19 00:55:14 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED
2025-09-19 00:55:14.530835 | orchestrator | 2025-09-19 00:55:14 | INFO  | Task 474336ce-1f3d-4c97-821d-e2daf8656011 is in state STARTED
2025-09-19 00:55:14.530859 | orchestrator | 2025-09-19 00:55:14 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:55:17.591668 | orchestrator | 2025-09-19 00:55:17 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED
2025-09-19 00:55:17.594838 | orchestrator | 2025-09-19 00:55:17 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED
2025-09-19 00:55:17.597107 | orchestrator | 2025-09-19 00:55:17 | INFO  | Task 474336ce-1f3d-4c97-821d-e2daf8656011 is in state STARTED
2025-09-19 00:55:17.597195 | orchestrator | 2025-09-19 00:55:17 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:55:20.645373 | orchestrator | 2025-09-19 00:55:20 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED
2025-09-19 00:55:20.646742 | orchestrator | 2025-09-19 00:55:20 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state STARTED
2025-09-19 00:55:20.649393 | orchestrator | 2025-09-19 00:55:20 | INFO  | Task 474336ce-1f3d-4c97-821d-e2daf8656011 is in state STARTED
2025-09-19 00:55:20.649529 | orchestrator | 2025-09-19 00:55:20 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:55:23.697602 | orchestrator | 2025-09-19 00:55:23 | INFO  | Task d7a7b309-89b2-4165-9617-e3bf5b3b9bc9 is in state STARTED
2025-09-19 00:55:23.698673 | orchestrator | 2025-09-19 00:55:23 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED
2025-09-19 00:55:23.701228 | orchestrator | 2025-09-19 00:55:23 | INFO  | Task 9b55f784-6f6c-44ec-8244-461ddb7dd2c0 is in state SUCCESS
2025-09-19 00:55:23.701569 | orchestrator |
2025-09-19 00:55:23.703282 | orchestrator |
2025-09-19 00:55:23.703321 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 00:55:23.703335 | orchestrator |
2025-09-19 00:55:23.703347 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 00:55:23.703359 | orchestrator | Friday 19 September 2025 00:53:42 +0000 (0:00:00.271) 0:00:00.271 ******
2025-09-19 00:55:23.703370 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:55:23.703383 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:55:23.703395 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:55:23.703406 | orchestrator |
2025-09-19 00:55:23.703417 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 00:55:23.703428 | orchestrator | Friday 19 September 2025 00:53:42 +0000 (0:00:00.297) 0:00:00.568 ******
2025-09-19 00:55:23.703439 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2025-09-19 00:55:23.703451 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2025-09-19 00:55:23.703462 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2025-09-19 00:55:23.703474 | orchestrator |
2025-09-19 00:55:23.703485 | orchestrator | PLAY [Apply role horizon] ******************************************************
2025-09-19 00:55:23.703496 | orchestrator |
2025-09-19 00:55:23.703507 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-19 00:55:23.703519 | orchestrator | Friday 19 September 2025 00:53:43 +0000 (0:00:00.441) 0:00:01.009 ******
2025-09-19 00:55:23.703531 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 00:55:23.703542 | orchestrator |
2025-09-19 00:55:23.703553 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2025-09-19 00:55:23.703564 | orchestrator | Friday 19 September 2025 00:53:43 +0000 (0:00:00.471) 0:00:01.481 ******
2025-09-19 00:55:23.703582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-19 00:55:23.703983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-19 00:55:23.704083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-19 00:55:23.704110 | orchestrator |
2025-09-19 00:55:23.704122 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2025-09-19 00:55:23.704133 | orchestrator | Friday 19 September 2025 00:53:45 +0000 (0:00:01.244) 0:00:02.725 ******
2025-09-19 00:55:23.704144 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:55:23.704155 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:55:23.704166 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:55:23.704176 | orchestrator |
2025-09-19 00:55:23.704187 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-19 00:55:23.704198 | orchestrator | Friday 19 September 2025 00:53:45 +0000 (0:00:00.419) 0:00:03.145 ******
2025-09-19 00:55:23.704223 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2025-09-19 00:55:23.704235 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2025-09-19 00:55:23.704247 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2025-09-19 00:55:23.704258 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2025-09-19 00:55:23.704268 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2025-09-19 00:55:23.704279 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2025-09-19 00:55:23.704290 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2025-09-19 00:55:23.704300 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2025-09-19 00:55:23.704311 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2025-09-19 00:55:23.704321 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2025-09-19 00:55:23.704332 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2025-09-19 00:55:23.704342 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2025-09-19 00:55:23.704353 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2025-09-19 00:55:23.704364 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2025-09-19 00:55:23.704374 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2025-09-19 00:55:23.704385 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2025-09-19 00:55:23.704403 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2025-09-19 00:55:23.704414 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2025-09-19 00:55:23.704425 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2025-09-19 00:55:23.704435 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2025-09-19 00:55:23.704446 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2025-09-19 00:55:23.704456 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2025-09-19 00:55:23.704467 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2025-09-19 00:55:23.704477 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2025-09-19 00:55:23.704490 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2025-09-19 00:55:23.704502 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2025-09-19 00:55:23.704513 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2025-09-19 00:55:23.704524 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2025-09-19 00:55:23.704534 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2025-09-19 00:55:23.704545 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2025-09-19 00:55:23.704556 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2025-09-19 00:55:23.704566 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2025-09-19 00:55:23.704577 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2025-09-19 00:55:23.704590 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2025-09-19 00:55:23.704600 | orchestrator |
2025-09-19 00:55:23.704611 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 00:55:23.704622 | orchestrator | Friday 19 September 2025 00:53:46 +0000 (0:00:00.782) 0:00:03.928 ******
2025-09-19 00:55:23.704633 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:55:23.704644 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:55:23.704655 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:55:23.704665 | orchestrator |
2025-09-19 00:55:23.704703 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 00:55:23.704733 | orchestrator | Friday 19 September 2025 00:53:46 +0000 (0:00:00.279) 0:00:04.208 ******
2025-09-19 00:55:23.704765 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:55:23.704786 | orchestrator |
2025-09-19 00:55:23.704800 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 00:55:23.704813 | orchestrator | Friday 19 September 2025 00:53:46 +0000 (0:00:00.142) 0:00:04.350 ******
2025-09-19 00:55:23.704827 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:55:23.704839 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:55:23.704851 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:55:23.704863 | orchestrator |
2025-09-19 00:55:23.704875 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 00:55:23.704895 | orchestrator | Friday 19 September 2025 00:53:47 +0000 (0:00:00.455) 0:00:04.806 ******
2025-09-19 00:55:23.704907 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:55:23.704919 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:55:23.704932 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:55:23.704943 | orchestrator |
2025-09-19 00:55:23.704956 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 00:55:23.704967 | orchestrator | Friday 19 September 2025 00:53:47 +0000 (0:00:00.315) 0:00:05.121 ******
2025-09-19 00:55:23.704979 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:55:23.704991 | orchestrator |
2025-09-19 00:55:23.705004 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 00:55:23.705017 | orchestrator | Friday 19 September 2025 00:53:47 +0000 (0:00:00.121) 0:00:05.243 ******
2025-09-19 00:55:23.705029 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:55:23.705041 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:55:23.705054 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:55:23.705066 | orchestrator |
2025-09-19 00:55:23.705077 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 00:55:23.705087 | orchestrator | Friday 19 September 2025 00:53:47 +0000 (0:00:00.261) 0:00:05.504 ******
2025-09-19 00:55:23.705098 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:55:23.705109 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:55:23.705119 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:55:23.705130 | orchestrator |
2025-09-19 00:55:23.705141 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 00:55:23.705151 | orchestrator | Friday 19 September 2025 00:53:48 +0000 (0:00:00.302) 0:00:05.806 ******
2025-09-19 00:55:23.705162 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:55:23.705173 | orchestrator |
2025-09-19 00:55:23.705183 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 00:55:23.705194 | orchestrator | Friday 19 September 2025 00:53:48 +0000 (0:00:00.373) 0:00:06.180 ******
2025-09-19 00:55:23.705205 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:55:23.705215 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:55:23.705226 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:55:23.705236 | orchestrator |
2025-09-19 00:55:23.705247 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 00:55:23.705258 | orchestrator | Friday 19 September 2025 00:53:48 +0000 (0:00:00.308) 0:00:06.488 ******
2025-09-19 00:55:23.705268 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:55:23.705279 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:55:23.705290 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:55:23.705300 | orchestrator |
2025-09-19 00:55:23.705311 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 00:55:23.705322 | orchestrator | Friday 19 September 2025 00:53:49 +0000 (0:00:00.323) 0:00:06.811 ******
2025-09-19 00:55:23.705333 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:55:23.705343 | orchestrator |
2025-09-19 00:55:23.705354 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 00:55:23.705364 | orchestrator | Friday 19 September 2025 00:53:49 +0000 (0:00:00.126) 0:00:06.938 ******
2025-09-19 00:55:23.705375 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:55:23.705386 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:55:23.705396 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:55:23.705407 | orchestrator |
2025-09-19 00:55:23.705417 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 00:55:23.705428 | orchestrator | Friday 19 September 2025 00:53:49 +0000 (0:00:00.284) 0:00:07.222 ******
2025-09-19 00:55:23.705439 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:55:23.705450 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:55:23.705460 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:55:23.705471 | orchestrator |
2025-09-19 00:55:23.705481 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 00:55:23.705499 | orchestrator | Friday 19 September 2025 00:53:50 +0000 (0:00:00.518) 0:00:07.740 ******
2025-09-19 00:55:23.705510 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:55:23.705521 | orchestrator |
2025-09-19 00:55:23.705532 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 00:55:23.705542 | orchestrator | Friday 19 September 2025 00:53:50 +0000 (0:00:00.126) 0:00:07.867 ******
2025-09-19 00:55:23.705553 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:55:23.705564 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:55:23.705574 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:55:23.705585 | orchestrator |
2025-09-19 00:55:23.705595 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 00:55:23.705606 | orchestrator | Friday 19 September 2025 00:53:50 +0000 (0:00:00.272) 0:00:08.140 ******
2025-09-19 00:55:23.705616 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:55:23.705627 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:55:23.705638 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:55:23.705649 | orchestrator |
2025-09-19 00:55:23.705659 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 00:55:23.705670 | orchestrator | Friday 19 September 2025 00:53:50 +0000 (0:00:00.292) 0:00:08.432 ******
2025-09-19 00:55:23.705702 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:55:23.705714 | orchestrator |
2025-09-19 00:55:23.705724 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 00:55:23.705735 | orchestrator | Friday 19 September 2025 00:53:50 +0000 (0:00:00.135) 0:00:08.568 ******
2025-09-19 00:55:23.705746 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:55:23.705756 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:55:23.705767 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:55:23.705777 | orchestrator |
2025-09-19 00:55:23.705794 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 00:55:23.705812 | orchestrator | Friday 19 September 2025 00:53:51 +0000 (0:00:00.473) 0:00:09.042 ******
2025-09-19 00:55:23.705823 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:55:23.705834 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:55:23.705844 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:55:23.705855 | orchestrator |
2025-09-19 00:55:23.705865 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 00:55:23.705876 | orchestrator | Friday 19 September 2025 00:53:51 +0000 (0:00:00.312) 0:00:09.354 ******
2025-09-19 00:55:23.705887 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:55:23.705897 | orchestrator |
2025-09-19 00:55:23.705908 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 00:55:23.705919 | orchestrator | Friday 19 September 2025 00:53:51 +0000 (0:00:00.117) 0:00:09.472 ******
2025-09-19 00:55:23.705929 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:55:23.705940 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:55:23.705950 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:55:23.705961 | orchestrator |
2025-09-19 00:55:23.705971 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 00:55:23.705982 | orchestrator | Friday 19 September 2025 00:53:52 +0000 (0:00:00.286) 0:00:09.758 ******
2025-09-19 00:55:23.705993 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:55:23.706003 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:55:23.706014 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:55:23.706080 | orchestrator |
2025-09-19 00:55:23.706091 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 00:55:23.706102 | orchestrator | Friday 19 September 2025 00:53:52 +0000 (0:00:00.325) 0:00:10.084 ******
2025-09-19 00:55:23.706113 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:55:23.706124 | orchestrator |
2025-09-19 00:55:23.706134 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 00:55:23.706145 | orchestrator | Friday 19 September 2025 00:53:52 +0000 (0:00:00.134) 0:00:10.218 ******
2025-09-19 00:55:23.706156 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:55:23.706175 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:55:23.706186 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:55:23.706197 | orchestrator |
2025-09-19 00:55:23.706207 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 00:55:23.706218 | orchestrator | Friday 19 September 2025 00:53:53 +0000 (0:00:00.523) 0:00:10.742 ******
2025-09-19 00:55:23.706229 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:55:23.706240 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:55:23.706251 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:55:23.706261 | orchestrator |
2025-09-19 00:55:23.706272 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 00:55:23.706283 | orchestrator | Friday 19 September 2025 00:53:53 +0000 (0:00:00.320) 0:00:11.063 ******
2025-09-19 00:55:23.706294 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:55:23.706305 | orchestrator |
2025-09-19 00:55:23.706316 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 00:55:23.706327 | orchestrator | Friday 19 September 2025 00:53:53 +0000 (0:00:00.116) 0:00:11.179 ******
2025-09-19 00:55:23.706338 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:55:23.706349 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:55:23.706359 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:55:23.706370 | orchestrator |
2025-09-19 00:55:23.706381 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 00:55:23.706392 | orchestrator | Friday 19 September 2025 00:53:53 +0000 (0:00:00.280) 0:00:11.460 ******
2025-09-19 00:55:23.706402 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:55:23.706413 | orchestrator | ok: [testbed-node-1]
2025-09-19 00:55:23.706424 | orchestrator | ok: [testbed-node-2]
2025-09-19 00:55:23.706435 | orchestrator |
2025-09-19 00:55:23.706445 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 00:55:23.706456 | orchestrator | Friday 19 September 2025 00:53:54 +0000 (0:00:00.498) 0:00:11.959 ******
2025-09-19 00:55:23.706467 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:55:23.706478 | orchestrator |
2025-09-19 00:55:23.706489 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 00:55:23.706499 | orchestrator | Friday 19 September 2025 00:53:54 +0000 (0:00:00.132) 0:00:12.091 ******
2025-09-19 00:55:23.706510 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:55:23.706521 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:55:23.706532 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:55:23.706542 | orchestrator |
2025-09-19 00:55:23.706553 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-09-19 00:55:23.706564 | orchestrator | Friday 19 September 2025 00:53:54 +0000 (0:00:00.294) 0:00:12.386 ******
2025-09-19 00:55:23.706575 | orchestrator | changed: [testbed-node-1]
2025-09-19 00:55:23.706585 | orchestrator | changed: [testbed-node-2]
2025-09-19 00:55:23.706596 | orchestrator | changed: [testbed-node-0]
2025-09-19 00:55:23.706607 | orchestrator |
2025-09-19 00:55:23.706617 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-09-19 00:55:23.706628 | orchestrator | Friday 19 September 2025 00:53:56 +0000 (0:00:01.677) 0:00:14.064 ******
2025-09-19 00:55:23.706639 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-09-19 00:55:23.706650 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-09-19 00:55:23.706661 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-09-19 00:55:23.706672 | orchestrator |
2025-09-19 00:55:23.706711 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-09-19 00:55:23.706723 | orchestrator | Friday 19 September 2025 00:53:58 +0000 (0:00:02.071) 0:00:16.135 ******
2025-09-19 00:55:23.706734 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-09-19 00:55:23.706745 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-09-19 00:55:23.706763 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-09-19 00:55:23.706774 | orchestrator |
2025-09-19 00:55:23.706798 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-09-19 00:55:23.706809 | orchestrator | Friday 19 September 2025 00:54:00 +0000 (0:00:02.284) 0:00:18.420 ******
2025-09-19 00:55:23.706820 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-09-19 00:55:23.706831 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-09-19 00:55:23.706841 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-09-19 00:55:23.706852 | orchestrator |
2025-09-19 00:55:23.706863 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-09-19 00:55:23.706873 | orchestrator | Friday 19 September 2025 00:54:02 +0000 (0:00:01.559) 0:00:19.979 ******
2025-09-19 00:55:23.706884 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:55:23.706894 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:55:23.706905 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:55:23.706915 | orchestrator |
2025-09-19 00:55:23.706926 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-09-19 00:55:23.706937 | orchestrator | Friday 19 September 2025 00:54:02 +0000 (0:00:00.287) 0:00:20.314 ******
2025-09-19 00:55:23.706948 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:55:23.706958 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:55:23.706969 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:55:23.706979 | orchestrator |
2025-09-19 00:55:23.706990 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-19 00:55:23.707000 | orchestrator | Friday 19 September 2025 00:54:02 +0000 (0:00:00.287) 0:00:20.602 ******
2025-09-19 00:55:23.707011 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 00:55:23.707022 | orchestrator |
2025-09-19 00:55:23.707033 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2025-09-19 00:55:23.707043 | orchestrator | Friday 19 September 2025 00:54:03 +0000 (0:00:00.766) 0:00:21.368 ******
2025-09-19 00:55:23.707056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False,
'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 00:55:23.707099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 00:55:23.707113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 00:55:23.707131 | orchestrator | 2025-09-19 00:55:23.707142 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-09-19 00:55:23.707152 | orchestrator | Friday 19 September 2025 00:54:05 +0000 (0:00:01.620) 0:00:22.988 ****** 2025-09-19 00:55:23.707178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}}}})  2025-09-19 00:55:23.707191 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:55:23.707216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 00:55:23.707234 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:55:23.707246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 00:55:23.707257 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:55:23.707268 | orchestrator | 2025-09-19 00:55:23.707279 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-09-19 00:55:23.707289 | orchestrator | Friday 19 September 2025 00:54:05 +0000 (0:00:00.641) 0:00:23.629 ****** 2025-09-19 00:55:23.707314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': 
{'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 00:55:23.707336 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:55:23.707348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 00:55:23.707366 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:55:23.707391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 00:55:23.707404 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:55:23.707415 | orchestrator | 2025-09-19 00:55:23.707426 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 
2025-09-19 00:55:23.707437 | orchestrator | Friday 19 September 2025 00:54:07 +0000 (0:00:01.240) 0:00:24.870 ****** 2025-09-19 00:55:23.707449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 00:55:23.707481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 00:55:23.707495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 00:55:23.707513 | orchestrator | 2025-09-19 00:55:23.707524 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-19 00:55:23.707535 | orchestrator | Friday 19 September 2025 00:54:08 +0000 (0:00:01.239) 0:00:26.110 ****** 2025-09-19 00:55:23.707546 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:55:23.707556 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:55:23.707567 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:55:23.707577 | orchestrator | 2025-09-19 00:55:23.707588 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-19 00:55:23.707610 | orchestrator | Friday 19 September 2025 00:54:08 +0000 (0:00:00.313) 0:00:26.423 ****** 2025-09-19 00:55:23.707621 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:55:23.707632 | orchestrator | 2025-09-19 00:55:23.707643 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-09-19 00:55:23.707654 | orchestrator | Friday 19 September 2025 00:54:09 +0000 (0:00:00.729) 0:00:27.152 ****** 2025-09-19 00:55:23.707664 | 
orchestrator | changed: [testbed-node-0] 2025-09-19 00:55:23.707698 | orchestrator | 2025-09-19 00:55:23.707712 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-09-19 00:55:23.707723 | orchestrator | Friday 19 September 2025 00:54:11 +0000 (0:00:02.273) 0:00:29.426 ****** 2025-09-19 00:55:23.707734 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:55:23.707745 | orchestrator | 2025-09-19 00:55:23.707755 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-09-19 00:55:23.707766 | orchestrator | Friday 19 September 2025 00:54:13 +0000 (0:00:02.225) 0:00:31.652 ****** 2025-09-19 00:55:23.707777 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:55:23.707787 | orchestrator | 2025-09-19 00:55:23.707798 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-19 00:55:23.707809 | orchestrator | Friday 19 September 2025 00:54:30 +0000 (0:00:16.301) 0:00:47.953 ****** 2025-09-19 00:55:23.707820 | orchestrator | 2025-09-19 00:55:23.707830 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-19 00:55:23.707841 | orchestrator | Friday 19 September 2025 00:54:30 +0000 (0:00:00.069) 0:00:48.023 ****** 2025-09-19 00:55:23.707852 | orchestrator | 2025-09-19 00:55:23.707862 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-19 00:55:23.707873 | orchestrator | Friday 19 September 2025 00:54:30 +0000 (0:00:00.069) 0:00:48.092 ****** 2025-09-19 00:55:23.707883 | orchestrator | 2025-09-19 00:55:23.707894 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-09-19 00:55:23.707905 | orchestrator | Friday 19 September 2025 00:54:30 +0000 (0:00:00.068) 0:00:48.161 ****** 2025-09-19 00:55:23.707927 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:55:23.707938 | 
orchestrator | changed: [testbed-node-2] 2025-09-19 00:55:23.707948 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:55:23.707959 | orchestrator | 2025-09-19 00:55:23.707970 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:55:23.707981 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-09-19 00:55:23.707992 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-19 00:55:23.708003 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-19 00:55:23.708014 | orchestrator | 2025-09-19 00:55:23.708024 | orchestrator | 2025-09-19 00:55:23.708035 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 00:55:23.708045 | orchestrator | Friday 19 September 2025 00:55:21 +0000 (0:00:51.500) 0:01:39.661 ****** 2025-09-19 00:55:23.708056 | orchestrator | =============================================================================== 2025-09-19 00:55:23.708067 | orchestrator | horizon : Restart horizon container ------------------------------------ 51.50s 2025-09-19 00:55:23.708077 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.30s 2025-09-19 00:55:23.708088 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.28s 2025-09-19 00:55:23.708099 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.27s 2025-09-19 00:55:23.708109 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.23s 2025-09-19 00:55:23.708120 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.07s 2025-09-19 00:55:23.708130 | orchestrator | horizon : Copying over config.json files for services ------------------- 
1.68s 2025-09-19 00:55:23.708141 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.62s 2025-09-19 00:55:23.708151 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.56s 2025-09-19 00:55:23.708162 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.24s 2025-09-19 00:55:23.708173 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.24s 2025-09-19 00:55:23.708183 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.24s 2025-09-19 00:55:23.708194 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.78s 2025-09-19 00:55:23.708204 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.77s 2025-09-19 00:55:23.708215 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.73s 2025-09-19 00:55:23.708225 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.64s 2025-09-19 00:55:23.708236 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.52s 2025-09-19 00:55:23.708247 | orchestrator | horizon : Update policy file name --------------------------------------- 0.52s 2025-09-19 00:55:23.708257 | orchestrator | horizon : Update policy file name --------------------------------------- 0.50s 2025-09-19 00:55:23.708268 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.47s 2025-09-19 00:55:23.708278 | orchestrator | 2025-09-19 00:55:23 | INFO  | Task 474336ce-1f3d-4c97-821d-e2daf8656011 is in state SUCCESS 2025-09-19 00:55:23.708301 | orchestrator | 2025-09-19 00:55:23 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:55:26.753208 | orchestrator | 2025-09-19 00:55:26 | INFO  | Task d7a7b309-89b2-4165-9617-e3bf5b3b9bc9 is in state 
STARTED 2025-09-19 00:55:26.755006 | orchestrator | 2025-09-19 00:55:26 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:55:26.755072 | orchestrator | 2025-09-19 00:55:26 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:55:29.802376 | orchestrator | 2025-09-19 00:55:29 | INFO  | Task d7a7b309-89b2-4165-9617-e3bf5b3b9bc9 is in state STARTED 2025-09-19 00:55:29.804481 | orchestrator | 2025-09-19 00:55:29 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:55:29.804532 | orchestrator | 2025-09-19 00:55:29 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:55:32.849174 | orchestrator | 2025-09-19 00:55:32 | INFO  | Task d7a7b309-89b2-4165-9617-e3bf5b3b9bc9 is in state STARTED 2025-09-19 00:55:32.849948 | orchestrator | 2025-09-19 00:55:32 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:55:32.849993 | orchestrator | 2025-09-19 00:55:32 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:55:35.891959 | orchestrator | 2025-09-19 00:55:35 | INFO  | Task d7a7b309-89b2-4165-9617-e3bf5b3b9bc9 is in state STARTED 2025-09-19 00:55:35.892502 | orchestrator | 2025-09-19 00:55:35 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:55:35.892521 | orchestrator | 2025-09-19 00:55:35 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:55:38.933095 | orchestrator | 2025-09-19 00:55:38 | INFO  | Task d7a7b309-89b2-4165-9617-e3bf5b3b9bc9 is in state STARTED 2025-09-19 00:55:38.934275 | orchestrator | 2025-09-19 00:55:38 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:55:38.934302 | orchestrator | 2025-09-19 00:55:38 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:55:41.981050 | orchestrator | 2025-09-19 00:55:41 | INFO  | Task d7a7b309-89b2-4165-9617-e3bf5b3b9bc9 is in state STARTED 2025-09-19 00:55:41.983386 | orchestrator | 2025-09-19 00:55:41 | INFO  
| Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:55:41.983470 | orchestrator | 2025-09-19 00:55:41 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:55:45.031812 | orchestrator | 2025-09-19 00:55:45 | INFO  | Task d7a7b309-89b2-4165-9617-e3bf5b3b9bc9 is in state STARTED 2025-09-19 00:55:45.032883 | orchestrator | 2025-09-19 00:55:45 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:55:45.032922 | orchestrator | 2025-09-19 00:55:45 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:55:48.074757 | orchestrator | 2025-09-19 00:55:48 | INFO  | Task d7a7b309-89b2-4165-9617-e3bf5b3b9bc9 is in state STARTED 2025-09-19 00:55:48.077455 | orchestrator | 2025-09-19 00:55:48 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:55:48.077520 | orchestrator | 2025-09-19 00:55:48 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:55:51.115572 | orchestrator | 2025-09-19 00:55:51 | INFO  | Task d7a7b309-89b2-4165-9617-e3bf5b3b9bc9 is in state STARTED 2025-09-19 00:55:51.117694 | orchestrator | 2025-09-19 00:55:51 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:55:51.118171 | orchestrator | 2025-09-19 00:55:51 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:55:54.162323 | orchestrator | 2025-09-19 00:55:54 | INFO  | Task d7a7b309-89b2-4165-9617-e3bf5b3b9bc9 is in state STARTED 2025-09-19 00:55:54.164325 | orchestrator | 2025-09-19 00:55:54 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:55:54.164511 | orchestrator | 2025-09-19 00:55:54 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:55:57.214334 | orchestrator | 2025-09-19 00:55:57 | INFO  | Task d7a7b309-89b2-4165-9617-e3bf5b3b9bc9 is in state STARTED 2025-09-19 00:55:57.216063 | orchestrator | 2025-09-19 00:55:57 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 
00:55:57.216175 | orchestrator | 2025-09-19 00:55:57 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:56:00.260291 | orchestrator | 2025-09-19 00:56:00 | INFO  | Task d7a7b309-89b2-4165-9617-e3bf5b3b9bc9 is in state STARTED 2025-09-19 00:56:00.262909 | orchestrator | 2025-09-19 00:56:00 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:56:00.263281 | orchestrator | 2025-09-19 00:56:00 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:56:03.304809 | orchestrator | 2025-09-19 00:56:03 | INFO  | Task d7a7b309-89b2-4165-9617-e3bf5b3b9bc9 is in state STARTED 2025-09-19 00:56:03.306998 | orchestrator | 2025-09-19 00:56:03 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:56:03.307034 | orchestrator | 2025-09-19 00:56:03 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:56:06.347900 | orchestrator | 2025-09-19 00:56:06 | INFO  | Task d7a7b309-89b2-4165-9617-e3bf5b3b9bc9 is in state STARTED 2025-09-19 00:56:06.349060 | orchestrator | 2025-09-19 00:56:06 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:56:06.349083 | orchestrator | 2025-09-19 00:56:06 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:56:09.404896 | orchestrator | 2025-09-19 00:56:09 | INFO  | Task d7a7b309-89b2-4165-9617-e3bf5b3b9bc9 is in state STARTED 2025-09-19 00:56:09.406341 | orchestrator | 2025-09-19 00:56:09 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:56:09.406379 | orchestrator | 2025-09-19 00:56:09 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:56:12.458808 | orchestrator | 2025-09-19 00:56:12 | INFO  | Task d7a7b309-89b2-4165-9617-e3bf5b3b9bc9 is in state STARTED 2025-09-19 00:56:12.461012 | orchestrator | 2025-09-19 00:56:12 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:56:12.461215 | orchestrator | 2025-09-19 00:56:12 | INFO  | Wait 1 second(s) 
until the next check 2025-09-19 00:56:15.516970 | orchestrator | 2025-09-19 00:56:15 | INFO  | Task d7a7b309-89b2-4165-9617-e3bf5b3b9bc9 is in state STARTED 2025-09-19 00:56:15.517418 | orchestrator | 2025-09-19 00:56:15 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:56:15.517483 | orchestrator | 2025-09-19 00:56:15 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:56:18.575247 | orchestrator | 2025-09-19 00:56:18 | INFO  | Task d7a7b309-89b2-4165-9617-e3bf5b3b9bc9 is in state STARTED 2025-09-19 00:56:18.575353 | orchestrator | 2025-09-19 00:56:18 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state STARTED 2025-09-19 00:56:18.575367 | orchestrator | 2025-09-19 00:56:18 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:56:21.603793 | orchestrator | 2025-09-19 00:56:21 | INFO  | Task f3b70418-c60a-4b25-8ce7-0fdbe4c259f2 is in state STARTED 2025-09-19 00:56:21.605188 | orchestrator | 2025-09-19 00:56:21 | INFO  | Task d7a7b309-89b2-4165-9617-e3bf5b3b9bc9 is in state STARTED 2025-09-19 00:56:21.605582 | orchestrator | 2025-09-19 00:56:21 | INFO  | Task bbb451a3-ad7b-4c71-ac30-a1c92f556a49 is in state SUCCESS 2025-09-19 00:56:21.607240 | orchestrator | 2025-09-19 00:56:21.607276 | orchestrator | 2025-09-19 00:56:21.607285 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-09-19 00:56:21.607293 | orchestrator | 2025-09-19 00:56:21.607300 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-09-19 00:56:21.607307 | orchestrator | Friday 19 September 2025 00:54:57 +0000 (0:00:00.154) 0:00:00.154 ****** 2025-09-19 00:56:21.607315 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-09-19 00:56:21.607342 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-19 00:56:21.607350 | 
orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-19 00:56:21.607357 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-09-19 00:56:21.607365 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-19 00:56:21.607372 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-09-19 00:56:21.607378 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-09-19 00:56:21.607384 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-09-19 00:56:21.607390 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-09-19 00:56:21.607397 | orchestrator | 2025-09-19 00:56:21.607405 | orchestrator | TASK [Create share directory] ************************************************** 2025-09-19 00:56:21.607412 | orchestrator | Friday 19 September 2025 00:55:01 +0000 (0:00:04.194) 0:00:04.348 ****** 2025-09-19 00:56:21.607420 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-19 00:56:21.607427 | orchestrator | 2025-09-19 00:56:21.607434 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-09-19 00:56:21.607441 | orchestrator | Friday 19 September 2025 00:55:02 +0000 (0:00:00.995) 0:00:05.343 ****** 2025-09-19 00:56:21.607448 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-09-19 00:56:21.607455 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-19 00:56:21.607474 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-19 00:56:21.607482 | orchestrator | changed: 
[testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-09-19 00:56:21.607489 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-19 00:56:21.607496 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-09-19 00:56:21.607560 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-09-19 00:56:21.607637 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-09-19 00:56:21.607645 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-09-19 00:56:21.607653 | orchestrator | 2025-09-19 00:56:21.607659 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-09-19 00:56:21.607666 | orchestrator | Friday 19 September 2025 00:55:14 +0000 (0:00:12.567) 0:00:17.910 ****** 2025-09-19 00:56:21.607673 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-09-19 00:56:21.607904 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-19 00:56:21.607917 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-19 00:56:21.607924 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-09-19 00:56:21.607930 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-19 00:56:21.607936 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-09-19 00:56:21.607943 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-09-19 00:56:21.607950 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-09-19 00:56:21.607957 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-09-19 00:56:21.607964 | orchestrator | 2025-09-19 
00:56:21.607972 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:56:21.607988 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 00:56:21.607997 | orchestrator | 2025-09-19 00:56:21.608004 | orchestrator | 2025-09-19 00:56:21.608011 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 00:56:21.608018 | orchestrator | Friday 19 September 2025 00:55:21 +0000 (0:00:06.739) 0:00:24.649 ****** 2025-09-19 00:56:21.608025 | orchestrator | =============================================================================== 2025-09-19 00:56:21.608032 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.57s 2025-09-19 00:56:21.608039 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.74s 2025-09-19 00:56:21.608046 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.19s 2025-09-19 00:56:21.608053 | orchestrator | Create share directory -------------------------------------------------- 1.00s 2025-09-19 00:56:21.608060 | orchestrator | 2025-09-19 00:56:21.608067 | orchestrator | 2025-09-19 00:56:21.608075 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 00:56:21.608082 | orchestrator | 2025-09-19 00:56:21.608113 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 00:56:21.608120 | orchestrator | Friday 19 September 2025 00:53:42 +0000 (0:00:00.252) 0:00:00.252 ****** 2025-09-19 00:56:21.608126 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:56:21.608132 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:56:21.608140 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:56:21.608147 | orchestrator | 2025-09-19 00:56:21.608154 | orchestrator | TASK [Group hosts 
based on enabled services] *********************************** 2025-09-19 00:56:21.608161 | orchestrator | Friday 19 September 2025 00:53:42 +0000 (0:00:00.294) 0:00:00.547 ****** 2025-09-19 00:56:21.608168 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-19 00:56:21.608176 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-19 00:56:21.608183 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-19 00:56:21.608190 | orchestrator | 2025-09-19 00:56:21.608197 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-09-19 00:56:21.608204 | orchestrator | 2025-09-19 00:56:21.608210 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-19 00:56:21.608217 | orchestrator | Friday 19 September 2025 00:53:43 +0000 (0:00:00.425) 0:00:00.972 ****** 2025-09-19 00:56:21.608224 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:56:21.608231 | orchestrator | 2025-09-19 00:56:21.608238 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-09-19 00:56:21.608245 | orchestrator | Friday 19 September 2025 00:53:43 +0000 (0:00:00.537) 0:00:01.510 ****** 2025-09-19 00:56:21.608263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 
'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 00:56:21.608274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 00:56:21.608308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 00:56:21.608319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 00:56:21.608329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 00:56:21.608340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 00:56:21.608349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 00:56:21.608362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 00:56:21.608370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 00:56:21.608377 | orchestrator | 2025-09-19 00:56:21.608385 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-09-19 00:56:21.608393 | orchestrator | Friday 19 September 2025 00:53:45 +0000 (0:00:01.919) 0:00:03.429 ****** 2025-09-19 00:56:21.608404 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-09-19 00:56:21.608411 | orchestrator | 2025-09-19 00:56:21.608418 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-09-19 00:56:21.608426 | orchestrator | Friday 19 September 2025 00:53:46 +0000 (0:00:00.855) 0:00:04.285 ****** 2025-09-19 00:56:21.608433 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:56:21.608440 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:56:21.608447 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:56:21.608454 | orchestrator | 2025-09-19 00:56:21.608461 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-09-19 00:56:21.608469 | orchestrator | Friday 19 September 2025 00:53:47 +0000 (0:00:00.458) 0:00:04.743 ****** 2025-09-19 00:56:21.608477 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 00:56:21.608484 | orchestrator | 2025-09-19 00:56:21.608492 | orchestrator | TASK [keystone : include_tasks] ************************************************ 
2025-09-19 00:56:21.608500 | orchestrator | Friday 19 September 2025 00:53:47 +0000 (0:00:00.670) 0:00:05.413 ****** 2025-09-19 00:56:21.608507 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:56:21.608515 | orchestrator | 2025-09-19 00:56:21.608522 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-09-19 00:56:21.608530 | orchestrator | Friday 19 September 2025 00:53:48 +0000 (0:00:00.498) 0:00:05.912 ****** 2025-09-19 00:56:21.608543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 00:56:21.608567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 00:56:21.608576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 00:56:21.608592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 00:56:21.608600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 00:56:21.608611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 00:56:21.608661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 00:56:21.608669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 00:56:21.608676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 00:56:21.608683 | orchestrator | 2025-09-19 00:56:21.608691 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-09-19 00:56:21.608698 | 
orchestrator | Friday 19 September 2025 00:53:51 +0000 (0:00:03.507) 0:00:09.420 ****** 2025-09-19 00:56:21.608711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 00:56:21.608718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 00:56:21.608734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 00:56:21.608741 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:56:21.608747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 00:56:21.608755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 00:56:21.608766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 00:56:21.608772 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:56:21.608777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 00:56:21.608797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 00:56:21.608805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 00:56:21.608811 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:56:21.608817 | orchestrator | 2025-09-19 00:56:21.608822 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-09-19 00:56:21.608828 | orchestrator | Friday 19 September 2025 00:53:52 +0000 (0:00:00.576) 0:00:09.996 ****** 2025-09-19 00:56:21.608834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 00:56:21.608909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 00:56:21.608915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 00:56:21.608928 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:56:21.608939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 00:56:21.608945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 
'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 00:56:21.608952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 00:56:21.608963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 00:56:21.608970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 00:56:21.608982 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:56:21.608988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 00:56:21.608994 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:56:21.609000 | orchestrator | 2025-09-19 00:56:21.609005 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-09-19 00:56:21.609014 | orchestrator | Friday 19 September 2025 00:53:53 +0000 (0:00:00.750) 0:00:10.747 ****** 2025-09-19 00:56:21.609021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 00:56:21.609028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 00:56:21.609040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 00:56:21.609052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 00:56:21.609061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 00:56:21.609068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 00:56:21.609074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 00:56:21.609080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 00:56:21.609093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 00:56:21.609105 | orchestrator | 2025-09-19 00:56:21.609111 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-09-19 00:56:21.609117 | orchestrator | Friday 19 September 2025 00:53:56 +0000 (0:00:03.528) 0:00:14.275 ****** 2025-09-19 00:56:21.609123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 00:56:21.609133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 00:56:21.609140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 00:56:21.609146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 00:56:21.609162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 00:56:21.609168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 00:56:21.609178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 00:56:21.609184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 00:56:21.609190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 00:56:21.609196 | orchestrator | 2025-09-19 00:56:21.609202 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-09-19 00:56:21.609208 | orchestrator | Friday 19 September 2025 00:54:01 +0000 (0:00:05.223) 0:00:19.499 ****** 2025-09-19 00:56:21.609214 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:56:21.609224 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:56:21.609231 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:56:21.609237 | orchestrator | 2025-09-19 00:56:21.609244 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-09-19 00:56:21.609250 | orchestrator | Friday 19 September 2025 00:54:03 +0000 (0:00:01.401) 0:00:20.900 ****** 2025-09-19 00:56:21.609256 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:56:21.609261 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:56:21.609267 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:56:21.609272 | orchestrator | 2025-09-19 00:56:21.609278 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-09-19 00:56:21.609287 | orchestrator | Friday 19 September 2025 00:54:03 +0000 (0:00:00.508) 0:00:21.409 ****** 2025-09-19 00:56:21.609293 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:56:21.609299 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:56:21.609305 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:56:21.609311 | orchestrator | 2025-09-19 00:56:21.609317 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-09-19 00:56:21.609323 | orchestrator | Friday 19 September 2025 00:54:04 +0000 
(0:00:00.361) 0:00:21.771 ****** 2025-09-19 00:56:21.609328 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:56:21.609334 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:56:21.609340 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:56:21.609346 | orchestrator | 2025-09-19 00:56:21.609352 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-09-19 00:56:21.609358 | orchestrator | Friday 19 September 2025 00:54:04 +0000 (0:00:00.497) 0:00:22.268 ****** 2025-09-19 00:56:21.609365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 00:56:21.609376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 00:56:21.609383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 00:56:21.609395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': 
'30'}}})  2025-09-19 00:56:21.609406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 00:56:21.609412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 00:56:21.609421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 00:56:21.609428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 00:56:21.609442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 00:56:21.609448 | orchestrator | 2025-09-19 00:56:21.609454 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-19 00:56:21.609461 | orchestrator 
| Friday 19 September 2025 00:54:07 +0000 (0:00:02.570) 0:00:24.839 ****** 2025-09-19 00:56:21.609467 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:56:21.609473 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:56:21.609478 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:56:21.609484 | orchestrator | 2025-09-19 00:56:21.609490 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-09-19 00:56:21.609496 | orchestrator | Friday 19 September 2025 00:54:07 +0000 (0:00:00.310) 0:00:25.149 ****** 2025-09-19 00:56:21.609601 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-19 00:56:21.609610 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-19 00:56:21.609637 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-19 00:56:21.609645 | orchestrator | 2025-09-19 00:56:21.609661 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-09-19 00:56:21.609669 | orchestrator | Friday 19 September 2025 00:54:09 +0000 (0:00:01.761) 0:00:26.910 ****** 2025-09-19 00:56:21.609678 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 00:56:21.609685 | orchestrator | 2025-09-19 00:56:21.609690 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-09-19 00:56:21.609697 | orchestrator | Friday 19 September 2025 00:54:10 +0000 (0:00:01.299) 0:00:28.210 ****** 2025-09-19 00:56:21.609703 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:56:21.609709 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:56:21.609715 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:56:21.609721 | orchestrator | 2025-09-19 00:56:21.609728 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] 
***************** 2025-09-19 00:56:21.609735 | orchestrator | Friday 19 September 2025 00:54:11 +0000 (0:00:00.529) 0:00:28.739 ****** 2025-09-19 00:56:21.609741 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-19 00:56:21.609748 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 00:56:21.609755 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-19 00:56:21.609761 | orchestrator | 2025-09-19 00:56:21.609768 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-09-19 00:56:21.609775 | orchestrator | Friday 19 September 2025 00:54:12 +0000 (0:00:00.958) 0:00:29.698 ****** 2025-09-19 00:56:21.609782 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:56:21.609788 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:56:21.609795 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:56:21.609802 | orchestrator | 2025-09-19 00:56:21.609808 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-09-19 00:56:21.609815 | orchestrator | Friday 19 September 2025 00:54:12 +0000 (0:00:00.298) 0:00:29.997 ****** 2025-09-19 00:56:21.609822 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-19 00:56:21.609829 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-19 00:56:21.609835 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-19 00:56:21.609850 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-19 00:56:21.609857 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-19 00:56:21.609869 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-19 00:56:21.609876 | orchestrator | changed: [testbed-node-0] => 
(item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-19 00:56:21.609883 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-19 00:56:21.609890 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-19 00:56:21.609896 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-19 00:56:21.609903 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-19 00:56:21.609910 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-19 00:56:21.609917 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-19 00:56:21.609924 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-19 00:56:21.609931 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-19 00:56:21.609937 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-19 00:56:21.609944 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-19 00:56:21.609950 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-19 00:56:21.609957 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-19 00:56:21.609964 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-19 00:56:21.609971 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-19 00:56:21.609978 | orchestrator | 2025-09-19 
00:56:21.609985 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-09-19 00:56:21.609992 | orchestrator | Friday 19 September 2025 00:54:21 +0000 (0:00:09.036) 0:00:39.033 ****** 2025-09-19 00:56:21.609999 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-19 00:56:21.610005 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-19 00:56:21.610045 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-19 00:56:21.610054 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-19 00:56:21.610061 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-19 00:56:21.610069 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-19 00:56:21.610076 | orchestrator | 2025-09-19 00:56:21.610083 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-09-19 00:56:21.610095 | orchestrator | Friday 19 September 2025 00:54:23 +0000 (0:00:02.546) 0:00:41.580 ****** 2025-09-19 00:56:21.610103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 
'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 00:56:21.610120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 00:56:21.610128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 00:56:21.610135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 00:56:21.610147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 00:56:21.610154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 00:56:21.610164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 00:56:21.610174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 00:56:21.610181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 00:56:21.610188 | orchestrator | 2025-09-19 00:56:21.610194 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-19 00:56:21.610201 | orchestrator | Friday 19 September 2025 00:54:26 +0000 (0:00:02.315) 0:00:43.896 ****** 2025-09-19 00:56:21.610207 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:56:21.610213 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:56:21.610219 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:56:21.610226 | orchestrator | 2025-09-19 00:56:21.610232 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-09-19 00:56:21.610238 | orchestrator | Friday 19 September 2025 00:54:26 +0000 (0:00:00.289) 0:00:44.185 ****** 2025-09-19 00:56:21.610245 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:56:21.610251 | orchestrator | 2025-09-19 00:56:21.610258 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-09-19 00:56:21.610264 | orchestrator | Friday 19 September 2025 00:54:28 +0000 (0:00:02.314) 0:00:46.500 ****** 2025-09-19 00:56:21.610270 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:56:21.610277 | orchestrator | 2025-09-19 00:56:21.610283 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-09-19 00:56:21.610289 | orchestrator | Friday 19 September 2025 00:54:31 +0000 (0:00:02.168) 0:00:48.669 ****** 2025-09-19 00:56:21.610295 | 
orchestrator | ok: [testbed-node-1] 2025-09-19 00:56:21.610302 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:56:21.610308 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:56:21.610314 | orchestrator | 2025-09-19 00:56:21.610320 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-09-19 00:56:21.610331 | orchestrator | Friday 19 September 2025 00:54:32 +0000 (0:00:01.325) 0:00:49.994 ****** 2025-09-19 00:56:21.610337 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:56:21.610343 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:56:21.610350 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:56:21.610356 | orchestrator | 2025-09-19 00:56:21.610364 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-09-19 00:56:21.610371 | orchestrator | Friday 19 September 2025 00:54:32 +0000 (0:00:00.346) 0:00:50.340 ****** 2025-09-19 00:56:21.610377 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:56:21.610383 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:56:21.610389 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:56:21.610395 | orchestrator | 2025-09-19 00:56:21.610401 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-09-19 00:56:21.610408 | orchestrator | Friday 19 September 2025 00:54:33 +0000 (0:00:00.369) 0:00:50.710 ****** 2025-09-19 00:56:21.610414 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:56:21.610420 | orchestrator | 2025-09-19 00:56:21.610426 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-09-19 00:56:21.610432 | orchestrator | Friday 19 September 2025 00:54:46 +0000 (0:00:13.819) 0:01:04.529 ****** 2025-09-19 00:56:21.610439 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:56:21.610445 | orchestrator | 2025-09-19 00:56:21.610451 | orchestrator | TASK [keystone : Flush handlers] 
*********************************************** 2025-09-19 00:56:21.610458 | orchestrator | Friday 19 September 2025 00:54:57 +0000 (0:00:10.193) 0:01:14.722 ****** 2025-09-19 00:56:21.610464 | orchestrator | 2025-09-19 00:56:21.610471 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-19 00:56:21.610477 | orchestrator | Friday 19 September 2025 00:54:57 +0000 (0:00:00.067) 0:01:14.790 ****** 2025-09-19 00:56:21.610483 | orchestrator | 2025-09-19 00:56:21.610490 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-19 00:56:21.610496 | orchestrator | Friday 19 September 2025 00:54:57 +0000 (0:00:00.246) 0:01:15.037 ****** 2025-09-19 00:56:21.610502 | orchestrator | 2025-09-19 00:56:21.610509 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-09-19 00:56:21.610515 | orchestrator | Friday 19 September 2025 00:54:57 +0000 (0:00:00.066) 0:01:15.104 ****** 2025-09-19 00:56:21.610521 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:56:21.610528 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:56:21.610534 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:56:21.610540 | orchestrator | 2025-09-19 00:56:21.610546 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-09-19 00:56:21.610553 | orchestrator | Friday 19 September 2025 00:55:14 +0000 (0:00:16.775) 0:01:31.879 ****** 2025-09-19 00:56:21.610559 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:56:21.610566 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:56:21.610572 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:56:21.610578 | orchestrator | 2025-09-19 00:56:21.610587 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-09-19 00:56:21.610594 | orchestrator | Friday 19 September 2025 00:55:24 +0000 (0:00:10.202) 
0:01:42.082 ****** 2025-09-19 00:56:21.610600 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:56:21.610606 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:56:21.610613 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:56:21.610655 | orchestrator | 2025-09-19 00:56:21.610661 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-19 00:56:21.610667 | orchestrator | Friday 19 September 2025 00:55:30 +0000 (0:00:06.042) 0:01:48.124 ****** 2025-09-19 00:56:21.610673 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:56:21.610679 | orchestrator | 2025-09-19 00:56:21.610685 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-09-19 00:56:21.610695 | orchestrator | Friday 19 September 2025 00:55:31 +0000 (0:00:00.756) 0:01:48.881 ****** 2025-09-19 00:56:21.610701 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:56:21.610707 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:56:21.610713 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:56:21.610718 | orchestrator | 2025-09-19 00:56:21.610724 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-09-19 00:56:21.610730 | orchestrator | Friday 19 September 2025 00:55:32 +0000 (0:00:00.811) 0:01:49.693 ****** 2025-09-19 00:56:21.610736 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:56:21.610742 | orchestrator | 2025-09-19 00:56:21.610748 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-09-19 00:56:21.610753 | orchestrator | Friday 19 September 2025 00:55:33 +0000 (0:00:01.751) 0:01:51.444 ****** 2025-09-19 00:56:21.610759 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-09-19 00:56:21.610765 | orchestrator | 2025-09-19 00:56:21.610771 | orchestrator | TASK [service-ks-register 
: keystone | Creating services] ********************** 2025-09-19 00:56:21.610776 | orchestrator | Friday 19 September 2025 00:55:44 +0000 (0:00:11.172) 0:02:02.616 ****** 2025-09-19 00:56:21.610782 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-09-19 00:56:21.610788 | orchestrator | 2025-09-19 00:56:21.610794 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-09-19 00:56:21.610799 | orchestrator | Friday 19 September 2025 00:56:07 +0000 (0:00:22.462) 0:02:25.079 ****** 2025-09-19 00:56:21.610805 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-09-19 00:56:21.610811 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-09-19 00:56:21.610817 | orchestrator | 2025-09-19 00:56:21.610822 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-09-19 00:56:21.610829 | orchestrator | Friday 19 September 2025 00:56:13 +0000 (0:00:06.409) 0:02:31.488 ****** 2025-09-19 00:56:21.610834 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:56:21.610840 | orchestrator | 2025-09-19 00:56:21.610846 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-09-19 00:56:21.610852 | orchestrator | Friday 19 September 2025 00:56:13 +0000 (0:00:00.128) 0:02:31.616 ****** 2025-09-19 00:56:21.610858 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:56:21.610864 | orchestrator | 2025-09-19 00:56:21.610870 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-09-19 00:56:21.610876 | orchestrator | Friday 19 September 2025 00:56:14 +0000 (0:00:00.561) 0:02:32.178 ****** 2025-09-19 00:56:21.610883 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:56:21.610889 | orchestrator | 2025-09-19 00:56:21.610900 | orchestrator | TASK 
[service-ks-register : keystone | Granting user roles] ********************
2025-09-19 00:56:21.610906 | orchestrator | Friday 19 September 2025 00:56:14 +0000 (0:00:00.137) 0:02:32.316 ******
2025-09-19 00:56:21.610913 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:56:21.610919 | orchestrator |
2025-09-19 00:56:21.610926 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-09-19 00:56:21.610933 | orchestrator | Friday 19 September 2025 00:56:15 +0000 (0:00:00.382) 0:02:32.699 ******
2025-09-19 00:56:21.610940 | orchestrator | ok: [testbed-node-0]
2025-09-19 00:56:21.610946 | orchestrator |
2025-09-19 00:56:21.610953 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-19 00:56:21.610960 | orchestrator | Friday 19 September 2025 00:56:18 +0000 (0:00:03.501) 0:02:36.201 ******
2025-09-19 00:56:21.610966 | orchestrator | skipping: [testbed-node-0]
2025-09-19 00:56:21.610973 | orchestrator | skipping: [testbed-node-1]
2025-09-19 00:56:21.610999 | orchestrator | skipping: [testbed-node-2]
2025-09-19 00:56:21.611007 | orchestrator |
2025-09-19 00:56:21.611013 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 00:56:21.611035 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-09-19 00:56:21.611049 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-09-19 00:56:21.611056 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-09-19 00:56:21.611063 | orchestrator |
2025-09-19 00:56:21.611069 | orchestrator |
2025-09-19 00:56:21.611075 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 00:56:21.611082 | orchestrator | Friday 19 September 2025 00:56:19 +0000 (0:00:00.491) 0:02:36.692 ******
2025-09-19 00:56:21.611089 | orchestrator | ===============================================================================
2025-09-19 00:56:21.611095 | orchestrator | service-ks-register : keystone | Creating services --------------------- 22.46s
2025-09-19 00:56:21.611102 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 16.78s
2025-09-19 00:56:21.611109 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.82s
2025-09-19 00:56:21.611119 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.17s
2025-09-19 00:56:21.611125 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.20s
2025-09-19 00:56:21.611131 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.19s
2025-09-19 00:56:21.611137 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.04s
2025-09-19 00:56:21.611143 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.41s
2025-09-19 00:56:21.611150 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.04s
2025-09-19 00:56:21.611156 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.22s
2025-09-19 00:56:21.611162 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.53s
2025-09-19 00:56:21.611167 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.51s
2025-09-19 00:56:21.611172 | orchestrator | keystone : Creating default user role ----------------------------------- 3.50s
2025-09-19 00:56:21.611178 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.57s
2025-09-19 00:56:21.611185 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.55s
2025-09-19 00:56:21.611192 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.32s
2025-09-19 00:56:21.611198 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.31s
2025-09-19 00:56:21.611206 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.17s
2025-09-19 00:56:21.611212 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.92s
2025-09-19 00:56:21.611219 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.76s
2025-09-19 00:56:21.611225 | orchestrator | 2025-09-19 00:56:21 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED
2025-09-19 00:56:21.611232 | orchestrator | 2025-09-19 00:56:21 | INFO  | Task 58ffaa40-5502-4f6d-9412-2b7012f8e882 is in state STARTED
2025-09-19 00:56:21.611240 | orchestrator | 2025-09-19 00:56:21 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED
2025-09-19 00:56:21.611245 | orchestrator | 2025-09-19 00:56:21 | INFO  | Wait 1 second(s) until the next check
2025-09-19 00:56:24.639267 | orchestrator | 2025-09-19 00:56:24 | INFO  | Task f3b70418-c60a-4b25-8ce7-0fdbe4c259f2 is in state STARTED
2025-09-19 00:56:24.639756 | orchestrator | 2025-09-19 00:56:24 | INFO  | Task d7a7b309-89b2-4165-9617-e3bf5b3b9bc9 is in state SUCCESS
2025-09-19 00:56:24.640202 | orchestrator | 2025-09-19 00:56:24 | INFO  | Task d2b477e6-9d12-4381-a7ea-be6610cae0ae is in state STARTED
2025-09-19 00:56:24.640912 | orchestrator | 2025-09-19 00:56:24 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED
2025-09-19 00:56:24.641569 | orchestrator | 2025-09-19 00:56:24 | INFO  | Task 58ffaa40-5502-4f6d-9412-2b7012f8e882 is in state STARTED
2025-09-19 00:56:24.644497 | orchestrator | 2025-09-19 00:56:24 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED
2025-09-19 00:56:24.644534 |
orchestrator | 2025-09-19 00:56:24 | INFO  | Wait 1 second(s) until the next check
[... identical task-status polling repeated every ~3 seconds from 00:56:27 through 00:57:55: tasks f3b70418-c60a-4b25-8ce7-0fdbe4c259f2, d2b477e6-9d12-4381-a7ea-be6610cae0ae, b6dfb3f9-adff-4bea-91dd-0c94007a7c9e, 58ffaa40-5502-4f6d-9412-2b7012f8e882 and 38e5d12f-6e15-4d77-a907-33b0af8c691f remained in state STARTED, each cycle ending with "Wait 1 second(s) until the next check" ...]
2025-09-19 00:57:55.887735 | orchestrator | 2025-09-19 00:57:55 | INFO  | Wait 1
second(s) until the next check 2025-09-19 00:57:58.916405 | orchestrator | 2025-09-19 00:57:58 | INFO  | Task f3b70418-c60a-4b25-8ce7-0fdbe4c259f2 is in state STARTED 2025-09-19 00:57:58.918184 | orchestrator | 2025-09-19 00:57:58 | INFO  | Task d2b477e6-9d12-4381-a7ea-be6610cae0ae is in state STARTED 2025-09-19 00:57:58.920308 | orchestrator | 2025-09-19 00:57:58 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 00:57:58.922359 | orchestrator | 2025-09-19 00:57:58 | INFO  | Task 58ffaa40-5502-4f6d-9412-2b7012f8e882 is in state STARTED 2025-09-19 00:57:58.924463 | orchestrator | 2025-09-19 00:57:58 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 00:57:58.924653 | orchestrator | 2025-09-19 00:57:58 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:58:01.945833 | orchestrator | 2025-09-19 00:58:01 | INFO  | Task f3b70418-c60a-4b25-8ce7-0fdbe4c259f2 is in state STARTED 2025-09-19 00:58:01.946211 | orchestrator | 2025-09-19 00:58:01 | INFO  | Task d2b477e6-9d12-4381-a7ea-be6610cae0ae is in state SUCCESS 2025-09-19 00:58:01.946776 | orchestrator | 2025-09-19 00:58:01 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 00:58:01.947242 | orchestrator | 2025-09-19 00:58:01.947272 | orchestrator | 2025-09-19 00:58:01.947312 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-09-19 00:58:01.947341 | orchestrator | 2025-09-19 00:58:01.947361 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-09-19 00:58:01.947380 | orchestrator | Friday 19 September 2025 00:55:26 +0000 (0:00:00.232) 0:00:00.232 ****** 2025-09-19 00:58:01.947401 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-09-19 00:58:01.947423 | orchestrator | 2025-09-19 00:58:01.947444 | orchestrator | TASK 
[osism.services.cephclient : Create required directories] ***************** 2025-09-19 00:58:01.947620 | orchestrator | Friday 19 September 2025 00:55:26 +0000 (0:00:00.251) 0:00:00.483 ****** 2025-09-19 00:58:01.947648 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-09-19 00:58:01.947660 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-09-19 00:58:01.947671 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-09-19 00:58:01.947682 | orchestrator | 2025-09-19 00:58:01.947694 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-09-19 00:58:01.947706 | orchestrator | Friday 19 September 2025 00:55:27 +0000 (0:00:01.306) 0:00:01.789 ****** 2025-09-19 00:58:01.947717 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-09-19 00:58:01.947728 | orchestrator | 2025-09-19 00:58:01.947739 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-09-19 00:58:01.947749 | orchestrator | Friday 19 September 2025 00:55:28 +0000 (0:00:01.132) 0:00:02.922 ****** 2025-09-19 00:58:01.947760 | orchestrator | changed: [testbed-manager] 2025-09-19 00:58:01.947771 | orchestrator | 2025-09-19 00:58:01.947782 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-09-19 00:58:01.947793 | orchestrator | Friday 19 September 2025 00:55:29 +0000 (0:00:01.016) 0:00:03.938 ****** 2025-09-19 00:58:01.947804 | orchestrator | changed: [testbed-manager] 2025-09-19 00:58:01.947814 | orchestrator | 2025-09-19 00:58:01.947825 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-09-19 00:58:01.947836 | orchestrator | Friday 19 September 2025 00:55:30 +0000 (0:00:00.904) 0:00:04.842 ****** 2025-09-19 00:58:01.947847 | orchestrator | FAILED - 
RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-09-19 00:58:01.947859 | orchestrator | ok: [testbed-manager] 2025-09-19 00:58:01.947872 | orchestrator | 2025-09-19 00:58:01.947885 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-09-19 00:58:01.947897 | orchestrator | Friday 19 September 2025 00:56:11 +0000 (0:00:41.303) 0:00:46.146 ****** 2025-09-19 00:58:01.947909 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-09-19 00:58:01.947921 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-09-19 00:58:01.947934 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-09-19 00:58:01.947946 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-09-19 00:58:01.947958 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-09-19 00:58:01.947971 | orchestrator | 2025-09-19 00:58:01.947983 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-09-19 00:58:01.947995 | orchestrator | Friday 19 September 2025 00:56:16 +0000 (0:00:04.031) 0:00:50.178 ****** 2025-09-19 00:58:01.948006 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-09-19 00:58:01.948017 | orchestrator | 2025-09-19 00:58:01.948027 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-09-19 00:58:01.948038 | orchestrator | Friday 19 September 2025 00:56:16 +0000 (0:00:00.492) 0:00:50.671 ****** 2025-09-19 00:58:01.948049 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:58:01.948059 | orchestrator | 2025-09-19 00:58:01.948070 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-09-19 00:58:01.948080 | orchestrator | Friday 19 September 2025 00:56:16 +0000 (0:00:00.119) 0:00:50.790 ****** 2025-09-19 00:58:01.948091 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:58:01.948101 | 
orchestrator | 2025-09-19 00:58:01.948112 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-09-19 00:58:01.948131 | orchestrator | Friday 19 September 2025 00:56:16 +0000 (0:00:00.331) 0:00:51.122 ****** 2025-09-19 00:58:01.948199 | orchestrator | changed: [testbed-manager] 2025-09-19 00:58:01.948221 | orchestrator | 2025-09-19 00:58:01.948242 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-09-19 00:58:01.948325 | orchestrator | Friday 19 September 2025 00:56:18 +0000 (0:00:01.797) 0:00:52.920 ****** 2025-09-19 00:58:01.948365 | orchestrator | changed: [testbed-manager] 2025-09-19 00:58:01.948386 | orchestrator | 2025-09-19 00:58:01.948398 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for a healthy service] ******* 2025-09-19 00:58:01.948409 | orchestrator | Friday 19 September 2025 00:56:19 +0000 (0:00:00.760) 0:00:53.681 ****** 2025-09-19 00:58:01.948420 | orchestrator | changed: [testbed-manager] 2025-09-19 00:58:01.948430 | orchestrator | 2025-09-19 00:58:01.948441 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-09-19 00:58:01.948451 | orchestrator | Friday 19 September 2025 00:56:20 +0000 (0:00:00.786) 0:00:54.467 ****** 2025-09-19 00:58:01.948462 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-09-19 00:58:01.948473 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-09-19 00:58:01.948484 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-09-19 00:58:01.948494 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-09-19 00:58:01.948505 | orchestrator | 2025-09-19 00:58:01.948515 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:58:01.948551 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 00:58:01.948564 | 
orchestrator | 2025-09-19 00:58:01.948575 | orchestrator | 2025-09-19 00:58:01.948600 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 00:58:01.948619 | orchestrator | Friday 19 September 2025 00:56:21 +0000 (0:00:01.561) 0:00:56.029 ****** 2025-09-19 00:58:01.948630 | orchestrator | =============================================================================== 2025-09-19 00:58:01.948641 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.30s 2025-09-19 00:58:01.948651 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.03s 2025-09-19 00:58:01.948661 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.80s 2025-09-19 00:58:01.948672 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.56s 2025-09-19 00:58:01.948682 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.31s 2025-09-19 00:58:01.948693 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.13s 2025-09-19 00:58:01.948703 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.02s 2025-09-19 00:58:01.948714 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.90s 2025-09-19 00:58:01.948724 | orchestrator | osism.services.cephclient : Wait for a healthy service ------------------ 0.79s 2025-09-19 00:58:01.948735 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.76s 2025-09-19 00:58:01.948745 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.49s 2025-09-19 00:58:01.948756 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.33s 2025-09-19 00:58:01.948766 | orchestrator | osism.services.cephclient : Include 
container tasks --------------------- 0.25s 2025-09-19 00:58:01.948777 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s 2025-09-19 00:58:01.948787 | orchestrator | 2025-09-19 00:58:01.948798 | orchestrator | 2025-09-19 00:58:01.948809 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2025-09-19 00:58:01.948819 | orchestrator | 2025-09-19 00:58:01.948829 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-09-19 00:58:01.948840 | orchestrator | Friday 19 September 2025 00:56:25 +0000 (0:00:00.229) 0:00:00.229 ****** 2025-09-19 00:58:01.948850 | orchestrator | changed: [testbed-manager] 2025-09-19 00:58:01.948861 | orchestrator | 2025-09-19 00:58:01.948872 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-09-19 00:58:01.948883 | orchestrator | Friday 19 September 2025 00:56:27 +0000 (0:00:01.147) 0:00:01.377 ****** 2025-09-19 00:58:01.948893 | orchestrator | changed: [testbed-manager] 2025-09-19 00:58:01.948904 | orchestrator | 2025-09-19 00:58:01.948914 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-09-19 00:58:01.948933 | orchestrator | Friday 19 September 2025 00:56:28 +0000 (0:00:00.879) 0:00:02.256 ****** 2025-09-19 00:58:01.948943 | orchestrator | changed: [testbed-manager] 2025-09-19 00:58:01.948954 | orchestrator | 2025-09-19 00:58:01.948965 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-09-19 00:58:01.948975 | orchestrator | Friday 19 September 2025 00:56:28 +0000 (0:00:00.820) 0:00:03.076 ****** 2025-09-19 00:58:01.948986 | orchestrator | changed: [testbed-manager] 2025-09-19 00:58:01.948996 | orchestrator | 2025-09-19 00:58:01.949007 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-09-19 
00:58:01.949017 | orchestrator | Friday 19 September 2025 00:56:29 +0000 (0:00:01.023) 0:00:04.100 ****** 2025-09-19 00:58:01.949028 | orchestrator | changed: [testbed-manager] 2025-09-19 00:58:01.949039 | orchestrator | 2025-09-19 00:58:01.949049 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-09-19 00:58:01.949060 | orchestrator | Friday 19 September 2025 00:56:30 +0000 (0:00:00.894) 0:00:04.994 ****** 2025-09-19 00:58:01.949070 | orchestrator | changed: [testbed-manager] 2025-09-19 00:58:01.949081 | orchestrator | 2025-09-19 00:58:01.949091 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-09-19 00:58:01.949102 | orchestrator | Friday 19 September 2025 00:56:31 +0000 (0:00:00.983) 0:00:05.977 ****** 2025-09-19 00:58:01.949112 | orchestrator | changed: [testbed-manager] 2025-09-19 00:58:01.949123 | orchestrator | 2025-09-19 00:58:01.949134 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-09-19 00:58:01.949144 | orchestrator | Friday 19 September 2025 00:56:32 +0000 (0:00:01.118) 0:00:07.096 ****** 2025-09-19 00:58:01.949155 | orchestrator | changed: [testbed-manager] 2025-09-19 00:58:01.949165 | orchestrator | 2025-09-19 00:58:01.949176 | orchestrator | TASK [Create admin user] ******************************************************* 2025-09-19 00:58:01.949186 | orchestrator | Friday 19 September 2025 00:56:34 +0000 (0:00:01.227) 0:00:08.324 ****** 2025-09-19 00:58:01.949197 | orchestrator | changed: [testbed-manager] 2025-09-19 00:58:01.949208 | orchestrator | 2025-09-19 00:58:01.949218 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-09-19 00:58:01.949229 | orchestrator | Friday 19 September 2025 00:57:35 +0000 (0:01:01.589) 0:01:09.913 ****** 2025-09-19 00:58:01.949239 | orchestrator | skipping: [testbed-manager] 2025-09-19 00:58:01.949250 | 
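The ceph dashboard play above disables the mgr dashboard module, applies a handful of `mgr/dashboard/*` settings, and re-enables the module. Assuming those tasks wrap the standard `ceph` CLI (the playbook's actual module usage is not visible in this log), the command sequence can be sketched like this:

```python
# Sketch of the ceph CLI calls the dashboard play presumably issues,
# in the same order as the tasks in the log; an assumption, not a
# transcript of the playbook.

def dashboard_commands(port=7000, addr="0.0.0.0"):
    """Return the ceph commands matching the play's task order."""
    settings = {
        "mgr/dashboard/ssl": "false",
        "mgr/dashboard/server_port": str(port),
        "mgr/dashboard/server_addr": addr,
        "mgr/dashboard/standby_behaviour": "error",
        "mgr/dashboard/standby_error_status_code": "404",
    }
    cmds = ["ceph mgr module disable dashboard"]  # Disable the ceph dashboard
    cmds += [f"ceph config set mgr {key} {value}" for key, value in settings.items()]
    cmds.append("ceph mgr module enable dashboard")  # Enable the ceph dashboard
    return cmds

for cmd in dashboard_commands():
    print(cmd)
```

Disabling before changing `server_port`/`server_addr` and enabling afterwards matters because the dashboard only re-reads these settings when the module restarts, which also explains the subsequent "Restart ceph manager service" plays on each node.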
orchestrator | 2025-09-19 00:58:01.949261 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-19 00:58:01.949271 | orchestrator | 2025-09-19 00:58:01.949282 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-19 00:58:01.949293 | orchestrator | Friday 19 September 2025 00:57:35 +0000 (0:00:00.148) 0:01:10.061 ****** 2025-09-19 00:58:01.949303 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:58:01.949314 | orchestrator | 2025-09-19 00:58:01.949324 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-19 00:58:01.949335 | orchestrator | 2025-09-19 00:58:01.949345 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-19 00:58:01.949356 | orchestrator | Friday 19 September 2025 00:57:47 +0000 (0:00:11.693) 0:01:21.754 ****** 2025-09-19 00:58:01.949366 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:58:01.949377 | orchestrator | 2025-09-19 00:58:01.949388 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-19 00:58:01.949398 | orchestrator | 2025-09-19 00:58:01.949414 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-19 00:58:01.949430 | orchestrator | Friday 19 September 2025 00:57:48 +0000 (0:00:01.365) 0:01:23.119 ****** 2025-09-19 00:58:01.949441 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:58:01.949451 | orchestrator | 2025-09-19 00:58:01.949462 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:58:01.949473 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 00:58:01.949493 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 
00:58:01.949504 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 00:58:01.949514 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 00:58:01.949541 | orchestrator | 2025-09-19 00:58:01.949553 | orchestrator | 2025-09-19 00:58:01.949563 | orchestrator | 2025-09-19 00:58:01.949574 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 00:58:01.949584 | orchestrator | Friday 19 September 2025 00:58:00 +0000 (0:00:11.207) 0:01:34.327 ****** 2025-09-19 00:58:01.949595 | orchestrator | =============================================================================== 2025-09-19 00:58:01.949605 | orchestrator | Create admin user ------------------------------------------------------ 61.59s 2025-09-19 00:58:01.949616 | orchestrator | Restart ceph manager service ------------------------------------------- 24.27s 2025-09-19 00:58:01.949627 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.23s 2025-09-19 00:58:01.949637 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.15s 2025-09-19 00:58:01.949648 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.12s 2025-09-19 00:58:01.949658 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.02s 2025-09-19 00:58:01.949669 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.98s 2025-09-19 00:58:01.949679 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.89s 2025-09-19 00:58:01.949690 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.88s 2025-09-19 00:58:01.949700 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.82s 2025-09-19 
00:58:01.949711 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.15s 2025-09-19 00:58:01.949840 | orchestrator | 2025-09-19 00:58:01 | INFO  | Task 58ffaa40-5502-4f6d-9412-2b7012f8e882 is in state STARTED 2025-09-19 00:58:01.949855 | orchestrator | 2025-09-19 00:58:01 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 00:58:01.949866 | orchestrator | 2025-09-19 00:58:01 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:58:04.973515 | orchestrator | 2025-09-19 00:58:04 | INFO  | Task f3b70418-c60a-4b25-8ce7-0fdbe4c259f2 is in state STARTED 2025-09-19 00:58:04.973675 | orchestrator | 2025-09-19 00:58:04 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 00:58:04.974573 | orchestrator | 2025-09-19 00:58:04 | INFO  | Task 58ffaa40-5502-4f6d-9412-2b7012f8e882 is in state STARTED 2025-09-19 00:58:04.974590 | orchestrator | 2025-09-19 00:58:04 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 00:58:04.974600 | orchestrator | 2025-09-19 00:58:04 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:58:08.015581 | orchestrator | 2025-09-19 00:58:08 | INFO  | Task f3b70418-c60a-4b25-8ce7-0fdbe4c259f2 is in state STARTED 2025-09-19 00:58:08.016027 | orchestrator | 2025-09-19 00:58:08 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 00:58:08.017474 | orchestrator | 2025-09-19 00:58:08 | INFO  | Task 58ffaa40-5502-4f6d-9412-2b7012f8e882 is in state STARTED 2025-09-19 00:58:08.018250 | orchestrator | 2025-09-19 00:58:08 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 00:58:08.018287 | orchestrator | 2025-09-19 00:58:08 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:58:11.049162 | orchestrator | 2025-09-19 00:58:11 | INFO  | Task f3b70418-c60a-4b25-8ce7-0fdbe4c259f2 is in state STARTED 2025-09-19 00:58:11.051175 | orchestrator | 2025-09-19 
00:58:11 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 00:58:11.052913 | orchestrator | 2025-09-19 00:58:11 | INFO  | Task 58ffaa40-5502-4f6d-9412-2b7012f8e882 is in state STARTED 2025-09-19 00:58:11.055048 | orchestrator | 2025-09-19 00:58:11 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 00:58:11.055090 | orchestrator | 2025-09-19 00:58:11 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:58:14.077314 | orchestrator | 2025-09-19 00:58:14 | INFO  | Task f3b70418-c60a-4b25-8ce7-0fdbe4c259f2 is in state STARTED 2025-09-19 00:58:14.077699 | orchestrator | 2025-09-19 00:58:14 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 00:58:14.078322 | orchestrator | 2025-09-19 00:58:14 | INFO  | Task 58ffaa40-5502-4f6d-9412-2b7012f8e882 is in state STARTED 2025-09-19 00:58:14.078997 | orchestrator | 2025-09-19 00:58:14 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 00:58:14.079024 | orchestrator | 2025-09-19 00:58:14 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:58:17.108547 | orchestrator | 2025-09-19 00:58:17 | INFO  | Task f3b70418-c60a-4b25-8ce7-0fdbe4c259f2 is in state STARTED 2025-09-19 00:58:17.108608 | orchestrator | 2025-09-19 00:58:17 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 00:58:17.109928 | orchestrator | 2025-09-19 00:58:17 | INFO  | Task 58ffaa40-5502-4f6d-9412-2b7012f8e882 is in state STARTED 2025-09-19 00:58:17.109948 | orchestrator | 2025-09-19 00:58:17 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 00:58:17.109955 | orchestrator | 2025-09-19 00:58:17 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:58:20.141407 | orchestrator | 2025-09-19 00:58:20 | INFO  | Task f3b70418-c60a-4b25-8ce7-0fdbe4c259f2 is in state STARTED 2025-09-19 00:58:20.142505 | orchestrator | 2025-09-19 00:58:20 | INFO  | Task 
b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 00:58:20.144297 | orchestrator | 2025-09-19 00:58:20 | INFO  | Task 58ffaa40-5502-4f6d-9412-2b7012f8e882 is in state STARTED 2025-09-19 00:58:20.145036 | orchestrator | 2025-09-19 00:58:20 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 00:58:20.145067 | orchestrator | 2025-09-19 00:58:20 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:58:23.183768 | orchestrator | 2025-09-19 00:58:23 | INFO  | Task f3b70418-c60a-4b25-8ce7-0fdbe4c259f2 is in state STARTED 2025-09-19 00:58:23.183926 | orchestrator | 2025-09-19 00:58:23 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 00:58:23.184394 | orchestrator | 2025-09-19 00:58:23 | INFO  | Task 58ffaa40-5502-4f6d-9412-2b7012f8e882 is in state STARTED 2025-09-19 00:58:23.185720 | orchestrator | 2025-09-19 00:58:23 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 00:58:23.185744 | orchestrator | 2025-09-19 00:58:23 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:58:26.209562 | orchestrator | 2025-09-19 00:58:26 | INFO  | Task f3b70418-c60a-4b25-8ce7-0fdbe4c259f2 is in state STARTED 2025-09-19 00:58:26.212473 | orchestrator | 2025-09-19 00:58:26 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 00:58:26.212870 | orchestrator | 2025-09-19 00:58:26 | INFO  | Task 7442900b-b905-4d01-8543-ef1f9028db1f is in state STARTED 2025-09-19 00:58:26.214110 | orchestrator | 2025-09-19 00:58:26 | INFO  | Task 58ffaa40-5502-4f6d-9412-2b7012f8e882 is in state SUCCESS 2025-09-19 00:58:26.215601 | orchestrator | 2025-09-19 00:58:26.215630 | orchestrator | 2025-09-19 00:58:26.215641 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 00:58:26.215651 | orchestrator | 2025-09-19 00:58:26.215661 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-09-19 00:58:26.215671 | orchestrator | Friday 19 September 2025 00:56:25 +0000 (0:00:00.406) 0:00:00.406 ****** 2025-09-19 00:58:26.215681 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:58:26.215691 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:58:26.215701 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:58:26.215710 | orchestrator | 2025-09-19 00:58:26.215720 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 00:58:26.215729 | orchestrator | Friday 19 September 2025 00:56:25 +0000 (0:00:00.448) 0:00:00.854 ****** 2025-09-19 00:58:26.215739 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-09-19 00:58:26.215749 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-09-19 00:58:26.215758 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-09-19 00:58:26.215768 | orchestrator | 2025-09-19 00:58:26.215777 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-09-19 00:58:26.215787 | orchestrator | 2025-09-19 00:58:26.215797 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-19 00:58:26.215806 | orchestrator | Friday 19 September 2025 00:56:25 +0000 (0:00:00.460) 0:00:01.315 ****** 2025-09-19 00:58:26.215816 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:58:26.215890 | orchestrator | 2025-09-19 00:58:26.215901 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-09-19 00:58:26.215923 | orchestrator | Friday 19 September 2025 00:56:26 +0000 (0:00:00.492) 0:00:01.807 ****** 2025-09-19 00:58:26.215933 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-09-19 00:58:26.215942 | orchestrator | 2025-09-19 00:58:26.215952 | 
orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-09-19 00:58:26.215973 | orchestrator | Friday 19 September 2025 00:56:29 +0000 (0:00:03.567) 0:00:05.374 ****** 2025-09-19 00:58:26.215983 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-09-19 00:58:26.215993 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-09-19 00:58:26.216002 | orchestrator | 2025-09-19 00:58:26.216012 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-09-19 00:58:26.216021 | orchestrator | Friday 19 September 2025 00:56:36 +0000 (0:00:06.878) 0:00:12.253 ****** 2025-09-19 00:58:26.216031 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-09-19 00:58:26.216040 | orchestrator | 2025-09-19 00:58:26.216050 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-09-19 00:58:26.216059 | orchestrator | Friday 19 September 2025 00:56:40 +0000 (0:00:03.687) 0:00:15.940 ****** 2025-09-19 00:58:26.216069 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 00:58:26.216078 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-09-19 00:58:26.216088 | orchestrator | 2025-09-19 00:58:26.216097 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-09-19 00:58:26.216107 | orchestrator | Friday 19 September 2025 00:56:44 +0000 (0:00:03.891) 0:00:19.832 ****** 2025-09-19 00:58:26.216116 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 00:58:26.216126 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-09-19 00:58:26.216136 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-09-19 00:58:26.216145 | orchestrator | changed: [testbed-node-0] => 
(item=observer) 2025-09-19 00:58:26.216154 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-09-19 00:58:26.216164 | orchestrator | 2025-09-19 00:58:26.216185 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-09-19 00:58:26.216195 | orchestrator | Friday 19 September 2025 00:57:01 +0000 (0:00:16.874) 0:00:36.706 ****** 2025-09-19 00:58:26.216206 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-09-19 00:58:26.216217 | orchestrator | 2025-09-19 00:58:26.216229 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-09-19 00:58:26.216240 | orchestrator | Friday 19 September 2025 00:57:05 +0000 (0:00:04.697) 0:00:41.404 ****** 2025-09-19 00:58:26.216253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 00:58:26.216280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 00:58:26.216298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 00:58:26.216310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.216330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.216343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.216361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.216374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.216390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.216402 | orchestrator | 2025-09-19 00:58:26.216413 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-09-19 00:58:26.216424 | orchestrator | Friday 19 September 
2025 00:57:08 +0000 (0:00:02.341) 0:00:43.745 ****** 2025-09-19 00:58:26.216436 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-09-19 00:58:26.216446 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-09-19 00:58:26.216457 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-09-19 00:58:26.216467 | orchestrator | 2025-09-19 00:58:26.216478 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-09-19 00:58:26.216488 | orchestrator | Friday 19 September 2025 00:57:09 +0000 (0:00:01.583) 0:00:45.328 ****** 2025-09-19 00:58:26.216522 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:58:26.216534 | orchestrator | 2025-09-19 00:58:26.216546 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-09-19 00:58:26.216558 | orchestrator | Friday 19 September 2025 00:57:10 +0000 (0:00:00.116) 0:00:45.445 ****** 2025-09-19 00:58:26.216568 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:58:26.216578 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:58:26.216588 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:58:26.216598 | orchestrator | 2025-09-19 00:58:26.216609 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-19 00:58:26.216619 | orchestrator | Friday 19 September 2025 00:57:10 +0000 (0:00:00.675) 0:00:46.120 ****** 2025-09-19 00:58:26.216629 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:58:26.216639 | orchestrator | 2025-09-19 00:58:26.216650 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-09-19 00:58:26.216660 | orchestrator | Friday 19 September 2025 00:57:11 +0000 (0:00:00.706) 0:00:46.827 ****** 2025-09-19 00:58:26.216671 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 00:58:26.216690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 00:58:26.216705 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 00:58:26.216722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.216733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.216744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.216761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.216772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.216783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.216794 | orchestrator | 2025-09-19 00:58:26.216805 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-09-19 00:58:26.216822 | orchestrator | Friday 19 September 2025 00:57:15 +0000 (0:00:03.628) 0:00:50.455 ****** 2025-09-19 00:58:26.216837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 
'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 00:58:26.216848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 00:58:26.216860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 00:58:26.216871 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:58:26.216889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 00:58:26.216900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 00:58:26.216922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 00:58:26.216933 | orchestrator | skipping: 
[testbed-node-1] 2025-09-19 00:58:26.216944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 00:58:26.216956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 00:58:26.216967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 00:58:26.216978 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:58:26.216988 | orchestrator | 2025-09-19 00:58:26.217005 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-09-19 00:58:26.217016 | orchestrator | Friday 19 September 2025 00:57:16 +0000 (0:00:01.765) 0:00:52.221 ****** 2025-09-19 00:58:26.217027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 00:58:26.217053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 00:58:26.217070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 00:58:26.217087 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:58:26.217104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 00:58:26.217122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 00:58:26.217148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 00:58:26.217165 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:58:26.217197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 00:58:26.217213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 00:58:26.217230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 00:58:26.217246 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:58:26.217262 | orchestrator | 
2025-09-19 00:58:26.217279 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-09-19 00:58:26.217295 | orchestrator | Friday 19 September 2025 00:57:18 +0000 (0:00:01.689) 0:00:53.911 ****** 2025-09-19 00:58:26.217313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 00:58:26.217603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 00:58:26.217646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 00:58:26.217657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.217668 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.217678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.217696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.217713 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.217723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.217733 | orchestrator | 2025-09-19 00:58:26.217747 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-09-19 00:58:26.217757 | orchestrator | Friday 19 September 2025 00:57:21 +0000 (0:00:03.289) 0:00:57.200 ****** 2025-09-19 00:58:26.217767 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:58:26.217777 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:58:26.217787 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:58:26.217796 | orchestrator | 2025-09-19 00:58:26.217805 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-09-19 00:58:26.217815 | orchestrator | Friday 19 September 2025 00:57:24 +0000 (0:00:02.327) 0:00:59.528 ****** 
2025-09-19 00:58:26.217825 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 00:58:26.217834 | orchestrator | 2025-09-19 00:58:26.217844 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-09-19 00:58:26.217853 | orchestrator | Friday 19 September 2025 00:57:25 +0000 (0:00:01.129) 0:01:00.657 ****** 2025-09-19 00:58:26.217863 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:58:26.217872 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:58:26.217881 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:58:26.217891 | orchestrator | 2025-09-19 00:58:26.217900 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-09-19 00:58:26.217910 | orchestrator | Friday 19 September 2025 00:57:26 +0000 (0:00:01.063) 0:01:01.721 ****** 2025-09-19 00:58:26.217920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 00:58:26.217935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 00:58:26.217951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 00:58:26.217966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.217976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.217986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.217997 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.218056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.218071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.218081 | orchestrator | 2025-09-19 00:58:26.218091 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-09-19 
00:58:26.218100 | orchestrator | Friday 19 September 2025 00:57:36 +0000 (0:00:10.639) 0:01:12.360 ****** 2025-09-19 00:58:26.218115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 00:58:26.218125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 00:58:26.218135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 00:58:26.218145 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:58:26.218166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 00:58:26.218177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 00:58:26.218187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 00:58:26.218197 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:58:26.218211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 00:58:26.218224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 00:58:26.218241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 00:58:26.218252 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:58:26.218263 | orchestrator | 2025-09-19 00:58:26.218273 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-09-19 00:58:26.218284 | orchestrator | Friday 19 September 2025 00:57:37 +0000 (0:00:00.811) 0:01:13.172 ****** 2025-09-19 00:58:26.218303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 00:58:26.218320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 00:58:26.218332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 00:58:26.218343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.218364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.218382 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.218394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.218411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.218422 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:58:26.218432 | orchestrator | 2025-09-19 00:58:26.218442 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-19 00:58:26.218451 | orchestrator | Friday 19 September 2025 00:57:41 +0000 (0:00:03.473) 0:01:16.646 ****** 2025-09-19 00:58:26.218461 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:58:26.218471 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:58:26.218485 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:58:26.218495 | orchestrator | 2025-09-19 00:58:26.218541 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-09-19 00:58:26.218551 | orchestrator | Friday 19 September 2025 00:57:41 +0000 (0:00:00.510) 0:01:17.156 ****** 2025-09-19 00:58:26.218561 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:58:26.218570 | orchestrator | 2025-09-19 00:58:26.218580 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-09-19 00:58:26.218589 | orchestrator | Friday 19 September 2025 00:57:44 +0000 (0:00:02.305) 0:01:19.462 ****** 2025-09-19 00:58:26.218599 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:58:26.218608 | orchestrator | 2025-09-19 00:58:26.218618 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-09-19 00:58:26.218627 | orchestrator | Friday 19 September 2025 
00:57:46 +0000 (0:00:02.603) 0:01:22.065 ****** 2025-09-19 00:58:26.218637 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:58:26.218646 | orchestrator | 2025-09-19 00:58:26.218655 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-19 00:58:26.218665 | orchestrator | Friday 19 September 2025 00:57:58 +0000 (0:00:12.078) 0:01:34.144 ****** 2025-09-19 00:58:26.218674 | orchestrator | 2025-09-19 00:58:26.218684 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-19 00:58:26.218693 | orchestrator | Friday 19 September 2025 00:57:58 +0000 (0:00:00.108) 0:01:34.253 ****** 2025-09-19 00:58:26.218703 | orchestrator | 2025-09-19 00:58:26.218712 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-19 00:58:26.218722 | orchestrator | Friday 19 September 2025 00:57:58 +0000 (0:00:00.060) 0:01:34.314 ****** 2025-09-19 00:58:26.218731 | orchestrator | 2025-09-19 00:58:26.218741 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-09-19 00:58:26.218750 | orchestrator | Friday 19 September 2025 00:57:58 +0000 (0:00:00.062) 0:01:34.376 ****** 2025-09-19 00:58:26.218760 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:58:26.218769 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:58:26.218779 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:58:26.218788 | orchestrator | 2025-09-19 00:58:26.218797 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-09-19 00:58:26.218807 | orchestrator | Friday 19 September 2025 00:58:06 +0000 (0:00:07.458) 0:01:41.835 ****** 2025-09-19 00:58:26.218817 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:58:26.218826 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:58:26.218841 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:58:26.218851 | 
orchestrator | 2025-09-19 00:58:26.218860 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-09-19 00:58:26.218870 | orchestrator | Friday 19 September 2025 00:58:12 +0000 (0:00:06.061) 0:01:47.897 ****** 2025-09-19 00:58:26.218880 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:58:26.218889 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:58:26.218898 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:58:26.218908 | orchestrator | 2025-09-19 00:58:26.218923 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:58:26.218940 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-19 00:58:26.218958 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 00:58:26.218974 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 00:58:26.218989 | orchestrator | 2025-09-19 00:58:26.219003 | orchestrator | 2025-09-19 00:58:26.219019 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 00:58:26.219036 | orchestrator | Friday 19 September 2025 00:58:23 +0000 (0:00:10.527) 0:01:58.424 ****** 2025-09-19 00:58:26.219061 | orchestrator | =============================================================================== 2025-09-19 00:58:26.219075 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.87s 2025-09-19 00:58:26.219089 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.08s 2025-09-19 00:58:26.219104 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.64s 2025-09-19 00:58:26.219127 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.53s 2025-09-19 
00:58:26.219144 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.46s 2025-09-19 00:58:26.219158 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.88s 2025-09-19 00:58:26.219174 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 6.06s 2025-09-19 00:58:26.219190 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.70s 2025-09-19 00:58:26.219205 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.89s 2025-09-19 00:58:26.219221 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.69s 2025-09-19 00:58:26.219237 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.63s 2025-09-19 00:58:26.219252 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.57s 2025-09-19 00:58:26.219268 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.47s 2025-09-19 00:58:26.219285 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.29s 2025-09-19 00:58:26.219301 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.60s 2025-09-19 00:58:26.219317 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.34s 2025-09-19 00:58:26.219334 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.33s 2025-09-19 00:58:26.219351 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.31s 2025-09-19 00:58:26.219363 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.77s 2025-09-19 00:58:26.219373 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.69s 2025-09-19 
00:58:26.219383 | orchestrator | 2025-09-19 00:58:26 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 00:58:26.219393 | orchestrator | 2025-09-19 00:58:26 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:58:29.236132 | orchestrator | 2025-09-19 00:58:29 | INFO  | Task f3b70418-c60a-4b25-8ce7-0fdbe4c259f2 is in state STARTED 2025-09-19 00:58:29.237571 | orchestrator | 2025-09-19 00:58:29 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 00:58:29.238144 | orchestrator | 2025-09-19 00:58:29 | INFO  | Task 7442900b-b905-4d01-8543-ef1f9028db1f is in state STARTED 2025-09-19 00:58:29.238608 | orchestrator | 2025-09-19 00:58:29 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 00:58:29.238662 | orchestrator | 2025-09-19 00:58:29 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:59:21.003760 | orchestrator | 2025-09-19 00:59:21.003864 | orchestrator | 2025-09-19 00:59:21.003891 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 00:59:21.004026 | orchestrator | 2025-09-19 00:59:21.004170 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 00:59:21.004265 | orchestrator | Friday 19 September 2025 00:56:25 +0000 (0:00:00.380) 0:00:00.380 ****** 2025-09-19
00:59:21.004291 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:59:21.004315 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:59:21.004333 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:59:21.004345 | orchestrator | 2025-09-19 00:59:21.004359 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 00:59:21.004372 | orchestrator | Friday 19 September 2025 00:56:25 +0000 (0:00:00.436) 0:00:00.816 ****** 2025-09-19 00:59:21.004385 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-09-19 00:59:21.004398 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-09-19 00:59:21.004411 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-09-19 00:59:21.004423 | orchestrator | 2025-09-19 00:59:21.004435 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-09-19 00:59:21.004447 | orchestrator | 2025-09-19 00:59:21.004487 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-19 00:59:21.004501 | orchestrator | Friday 19 September 2025 00:56:26 +0000 (0:00:00.571) 0:00:01.387 ****** 2025-09-19 00:59:21.004551 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:59:21.004566 | orchestrator | 2025-09-19 00:59:21.004578 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-09-19 00:59:21.004591 | orchestrator | Friday 19 September 2025 00:56:26 +0000 (0:00:00.599) 0:00:01.987 ****** 2025-09-19 00:59:21.004604 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-09-19 00:59:21.004616 | orchestrator | 2025-09-19 00:59:21.004629 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-09-19 00:59:21.004643 | orchestrator | Friday 19 September 2025 00:56:30 +0000 
(0:00:03.640) 0:00:05.628 ****** 2025-09-19 00:59:21.004655 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-09-19 00:59:21.004666 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-09-19 00:59:21.004677 | orchestrator | 2025-09-19 00:59:21.004692 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-09-19 00:59:21.004708 | orchestrator | Friday 19 September 2025 00:56:37 +0000 (0:00:06.628) 0:00:12.256 ****** 2025-09-19 00:59:21.004720 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-19 00:59:21.004731 | orchestrator | 2025-09-19 00:59:21.004741 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-09-19 00:59:21.004756 | orchestrator | Friday 19 September 2025 00:56:40 +0000 (0:00:03.184) 0:00:15.441 ****** 2025-09-19 00:59:21.004776 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 00:59:21.004796 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-09-19 00:59:21.004816 | orchestrator | 2025-09-19 00:59:21.004836 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-09-19 00:59:21.004856 | orchestrator | Friday 19 September 2025 00:56:44 +0000 (0:00:04.151) 0:00:19.592 ****** 2025-09-19 00:59:21.004875 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 00:59:21.004944 | orchestrator | 2025-09-19 00:59:21.004957 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-09-19 00:59:21.004968 | orchestrator | Friday 19 September 2025 00:56:47 +0000 (0:00:03.533) 0:00:23.126 ****** 2025-09-19 00:59:21.004979 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-09-19 00:59:21.004989 | orchestrator | 2025-09-19 
00:59:21.005000 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-09-19 00:59:21.005011 | orchestrator | Friday 19 September 2025 00:56:51 +0000 (0:00:03.896) 0:00:27.022 ****** 2025-09-19 00:59:21.005025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 00:59:21.005063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 00:59:21.005093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 00:59:21.005106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005314 | orchestrator | 2025-09-19 00:59:21.005325 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-09-19 00:59:21.005336 | orchestrator | Friday 19 September 2025 00:56:54 +0000 (0:00:03.074) 0:00:30.096 ****** 2025-09-19 00:59:21.005347 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:59:21.005358 | orchestrator | 2025-09-19 00:59:21.005368 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-09-19 00:59:21.005379 | orchestrator | Friday 19 September 2025 00:56:55 +0000 (0:00:00.137) 0:00:30.234 ****** 2025-09-19 00:59:21.005389 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:59:21.005400 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:59:21.005411 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:59:21.005422 | orchestrator | 2025-09-19 00:59:21.005432 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-19 00:59:21.005443 | orchestrator | Friday 19 September 2025 00:56:55 +0000 
(0:00:00.294) 0:00:30.529 ****** 2025-09-19 00:59:21.005453 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:59:21.005503 | orchestrator | 2025-09-19 00:59:21.005524 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-09-19 00:59:21.005553 | orchestrator | Friday 19 September 2025 00:56:56 +0000 (0:00:00.761) 0:00:31.290 ****** 2025-09-19 00:59:21.005583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 00:59:21.005615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 
'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 00:59:21.005635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 00:59:21.005647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005658 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 
00:59:21.005708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005838 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.005974 | orchestrator | 2025-09-19 00:59:21.005992 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-09-19 00:59:21.006012 | orchestrator | Friday 19 September 2025 00:57:02 +0000 (0:00:06.460) 0:00:37.751 ****** 2025-09-19 00:59:21.006183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 00:59:21.006219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 00:59:21.006263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.006276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.006288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.006310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.006322 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:59:21.006334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 00:59:21 | INFO  | Task f3b70418-c60a-4b25-8ce7-0fdbe4c259f2 is in state SUCCESS 2025-09-19 00:59:21.006352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 00:59:21.006926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.006944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.006956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.006988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 
00:59:21.007002 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:59:21.007015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 00:59:21.007044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 00:59:21.007061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.007073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.007084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.007104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.007116 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:59:21.007127 | orchestrator | 2025-09-19 00:59:21.007138 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-09-19 00:59:21.007150 | orchestrator | Friday 19 September 2025 00:57:04 +0000 (0:00:01.795) 0:00:39.547 ****** 2025-09-19 00:59:21.007162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 00:59:21.007180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 00:59:21.007197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.007208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.007226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.007238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.007249 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:59:21.007260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 00:59:21.007278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 00:59:21.007294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 00:59:21.007313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 00:59:21.007324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.007336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.007347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.007367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.007386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.007400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.007420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.007433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.007446 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:59:21.007488 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:59:21.007502 | orchestrator | 2025-09-19 00:59:21.007515 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-09-19 00:59:21.007527 | orchestrator | Friday 19 September 2025 00:57:05 +0000 (0:00:01.490) 0:00:41.037 ****** 2025-09-19 00:59:21.007540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 00:59:21.007571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 00:59:21.007590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 00:59:21.007611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 00:59:21.007625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 
00:59:21.007638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 00:59:21.007659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.007673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 
00:59:21.007698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.007712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.007725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.007739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.007753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.007772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.007793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.007811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.007822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.007834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.007845 | orchestrator | 2025-09-19 00:59:21.007856 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-09-19 00:59:21.007867 | orchestrator | Friday 19 September 2025 00:57:12 +0000 (0:00:06.867) 0:00:47.905 ****** 2025-09-19 00:59:21.007878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 00:59:21.007896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 00:59:21.007919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 00:59:21.007930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 00:59:21.007942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 00:59:21.007954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 00:59:21.007965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.007982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.008005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.008017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.008029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.008040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.008052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.008069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.008087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.008103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.008115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.008126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.008137 | orchestrator | 2025-09-19 00:59:21.008148 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-09-19 00:59:21.008159 | orchestrator | Friday 19 September 2025 00:57:33 +0000 (0:00:20.657) 0:01:08.562 ****** 2025-09-19 00:59:21.008170 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-19 00:59:21.008181 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-19 00:59:21.008192 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-19 00:59:21.008203 | orchestrator | 2025-09-19 00:59:21.008213 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-09-19 00:59:21.008225 | orchestrator | Friday 19 September 2025 00:57:39 +0000 (0:00:05.645) 0:01:14.208 ****** 2025-09-19 00:59:21.008235 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-19 00:59:21.008246 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-19 00:59:21.008257 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-19 00:59:21.008267 | orchestrator | 2025-09-19 00:59:21.008278 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-09-19 00:59:21.008289 | orchestrator | Friday 19 September 2025 00:57:42 +0000 (0:00:03.704) 0:01:17.912 ****** 2025-09-19 00:59:21.008313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 00:59:21.008330 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 00:59:21.008342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 00:59:21.008353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 00:59:21.008365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.008381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.008398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.008414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 00:59:21.008426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.008437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.008449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.008475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 00:59:21.008501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.008517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.008529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.008540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.008551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.008562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.008579 | orchestrator | 2025-09-19 00:59:21.008590 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-09-19 00:59:21.008601 | orchestrator | Friday 19 September 2025 00:57:45 +0000 (0:00:02.765) 0:01:20.678 ****** 2025-09-19 00:59:21.008620 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 00:59:21.008639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 00:59:21.008651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 00:59:21.008662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 00:59:21.008673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.008693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.008711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 00:59:21.008727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.008739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.008750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.008761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  
2025-09-19 00:59:21.008779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 00:59:21.008796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.008808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 
00:59:21.008823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.008835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.008846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.008863 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.008875 | orchestrator | 2025-09-19 00:59:21.008886 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-19 00:59:21.008897 | orchestrator | Friday 19 September 2025 00:57:48 +0000 (0:00:02.799) 0:01:23.477 ****** 2025-09-19 00:59:21.008907 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:59:21.008919 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:59:21.008930 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:59:21.008941 | orchestrator | 2025-09-19 00:59:21.008951 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-09-19 00:59:21.008962 | orchestrator | Friday 19 September 2025 00:57:48 +0000 (0:00:00.412) 0:01:23.889 ****** 2025-09-19 00:59:21.008980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 00:59:21.008997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 00:59:21.009008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 00:59:21.009021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 00:59:21.009038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.009050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.009068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.009084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.009096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.009107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.009124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.009135 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:59:21.009147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.009158 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:59:21.009175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 00:59:21.009191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 00:59:21.009203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.009215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.009231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.009243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 00:59:21.009254 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:59:21.009266 | orchestrator | 2025-09-19 00:59:21.009277 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-09-19 00:59:21.009288 | orchestrator | Friday 19 September 2025 00:57:50 +0000 (0:00:01.483) 0:01:25.373 ****** 2025-09-19 00:59:21.009306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 00:59:21.009323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': 
'30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 00:59:21.009335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 00:59:21.009353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 00:59:21.009364 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 00:59:21.009376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 00:59:21.009448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.009516 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.009543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.009555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.009566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.009577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.009595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.009607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.009623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.009641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.009652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.009664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 00:59:21.009675 | orchestrator | 2025-09-19 00:59:21.009687 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-19 00:59:21.009698 | orchestrator | Friday 19 September 2025 00:57:55 +0000 (0:00:05.459) 0:01:30.833 ****** 2025-09-19 00:59:21.009709 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:59:21.009720 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:59:21.009731 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:59:21.009741 | orchestrator | 2025-09-19 00:59:21.009752 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-09-19 00:59:21.009763 | orchestrator | Friday 19 September 2025 00:57:56 +0000 (0:00:00.568) 0:01:31.401 ****** 2025-09-19 00:59:21.009775 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-09-19 00:59:21.009786 | orchestrator | 2025-09-19 
00:59:21.009796 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-09-19 00:59:21.009807 | orchestrator | Friday 19 September 2025 00:57:58 +0000 (0:00:02.288) 0:01:33.690 ****** 2025-09-19 00:59:21.009818 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-19 00:59:21.009829 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-09-19 00:59:21.009840 | orchestrator | 2025-09-19 00:59:21.009851 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-09-19 00:59:21.009868 | orchestrator | Friday 19 September 2025 00:58:01 +0000 (0:00:02.872) 0:01:36.562 ****** 2025-09-19 00:59:21.009879 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:59:21.009890 | orchestrator | 2025-09-19 00:59:21.009901 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-19 00:59:21.009912 | orchestrator | Friday 19 September 2025 00:58:15 +0000 (0:00:14.327) 0:01:50.890 ****** 2025-09-19 00:59:21.009923 | orchestrator | 2025-09-19 00:59:21.009940 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-19 00:59:21.009952 | orchestrator | Friday 19 September 2025 00:58:15 +0000 (0:00:00.065) 0:01:50.955 ****** 2025-09-19 00:59:21.009963 | orchestrator | 2025-09-19 00:59:21.009974 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-19 00:59:21.009985 | orchestrator | Friday 19 September 2025 00:58:15 +0000 (0:00:00.080) 0:01:51.036 ****** 2025-09-19 00:59:21.009995 | orchestrator | 2025-09-19 00:59:21.010006 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-09-19 00:59:21.010049 | orchestrator | Friday 19 September 2025 00:58:15 +0000 (0:00:00.075) 0:01:51.112 ****** 2025-09-19 00:59:21.010063 | orchestrator | changed: [testbed-node-0] 
2025-09-19 00:59:21.010074 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:59:21.010085 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:59:21.010096 | orchestrator | 2025-09-19 00:59:21.010112 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-09-19 00:59:21.010124 | orchestrator | Friday 19 September 2025 00:58:24 +0000 (0:00:08.636) 0:01:59.749 ****** 2025-09-19 00:59:21.010134 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:59:21.010145 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:59:21.010156 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:59:21.010167 | orchestrator | 2025-09-19 00:59:21.010177 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-09-19 00:59:21.010189 | orchestrator | Friday 19 September 2025 00:58:34 +0000 (0:00:09.896) 0:02:09.646 ****** 2025-09-19 00:59:21.010199 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:59:21.010212 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:59:21.010223 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:59:21.010233 | orchestrator | 2025-09-19 00:59:21.010244 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-09-19 00:59:21.010255 | orchestrator | Friday 19 September 2025 00:58:45 +0000 (0:00:11.053) 0:02:20.699 ****** 2025-09-19 00:59:21.010266 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:59:21.010277 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:59:21.010288 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:59:21.010299 | orchestrator | 2025-09-19 00:59:21.010310 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-09-19 00:59:21.010320 | orchestrator | Friday 19 September 2025 00:58:55 +0000 (0:00:10.421) 0:02:31.121 ****** 2025-09-19 00:59:21.010331 | orchestrator | changed: [testbed-node-2] 2025-09-19 
00:59:21.010342 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:59:21.010353 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:59:21.010364 | orchestrator | 2025-09-19 00:59:21.010376 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-09-19 00:59:21.010387 | orchestrator | Friday 19 September 2025 00:59:06 +0000 (0:00:10.776) 0:02:41.898 ****** 2025-09-19 00:59:21.010399 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:59:21.010411 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:59:21.010422 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:59:21.010433 | orchestrator | 2025-09-19 00:59:21.010443 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-09-19 00:59:21.010454 | orchestrator | Friday 19 September 2025 00:59:13 +0000 (0:00:06.679) 0:02:48.578 ****** 2025-09-19 00:59:21.010516 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:59:21.010529 | orchestrator | 2025-09-19 00:59:21.010540 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:59:21.010552 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-19 00:59:21.010563 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 00:59:21.010575 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 00:59:21.010595 | orchestrator | 2025-09-19 00:59:21.010606 | orchestrator | 2025-09-19 00:59:21.010617 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 00:59:21.010628 | orchestrator | Friday 19 September 2025 00:59:20 +0000 (0:00:07.219) 0:02:55.797 ****** 2025-09-19 00:59:21.010639 | orchestrator | 
=============================================================================== 2025-09-19 00:59:21.010650 | orchestrator | designate : Copying over designate.conf -------------------------------- 20.66s 2025-09-19 00:59:21.010661 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.33s 2025-09-19 00:59:21.010672 | orchestrator | designate : Restart designate-central container ------------------------ 11.05s 2025-09-19 00:59:21.010684 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.78s 2025-09-19 00:59:21.010695 | orchestrator | designate : Restart designate-producer container ----------------------- 10.42s 2025-09-19 00:59:21.010706 | orchestrator | designate : Restart designate-api container ----------------------------- 9.90s 2025-09-19 00:59:21.010717 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.64s 2025-09-19 00:59:21.010729 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.22s 2025-09-19 00:59:21.010740 | orchestrator | designate : Copying over config.json files for services ----------------- 6.87s 2025-09-19 00:59:21.010751 | orchestrator | designate : Restart designate-worker container -------------------------- 6.68s 2025-09-19 00:59:21.010772 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.63s 2025-09-19 00:59:21.010784 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.46s 2025-09-19 00:59:21.010794 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.65s 2025-09-19 00:59:21.010804 | orchestrator | designate : Check designate containers ---------------------------------- 5.46s 2025-09-19 00:59:21.010813 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.15s 2025-09-19 00:59:21.010823 | orchestrator | 
service-ks-register : designate | Granting user roles ------------------- 3.90s 2025-09-19 00:59:21.010833 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.70s 2025-09-19 00:59:21.010843 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.64s 2025-09-19 00:59:21.010853 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.53s 2025-09-19 00:59:21.010864 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.18s 2025-09-19 00:59:21.010874 | orchestrator | 2025-09-19 00:59:21 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 00:59:21.010890 | orchestrator | 2025-09-19 00:59:21 | INFO  | Task 7442900b-b905-4d01-8543-ef1f9028db1f is in state STARTED 2025-09-19 00:59:21.010901 | orchestrator | 2025-09-19 00:59:21 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 00:59:21.010911 | orchestrator | 2025-09-19 00:59:21 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:59:24.055648 | orchestrator | 2025-09-19 00:59:24 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 00:59:24.059305 | orchestrator | 2025-09-19 00:59:24 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 00:59:24.061780 | orchestrator | 2025-09-19 00:59:24 | INFO  | Task 7442900b-b905-4d01-8543-ef1f9028db1f is in state STARTED 2025-09-19 00:59:24.063540 | orchestrator | 2025-09-19 00:59:24 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 00:59:24.063644 | orchestrator | 2025-09-19 00:59:24 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:59:27.102160 | orchestrator | 2025-09-19 00:59:27 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 00:59:27.103664 | orchestrator | 2025-09-19 00:59:27 | INFO  | Task 
7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 00:59:27.105493 | orchestrator | 2025-09-19 00:59:27 | INFO  | Task 7442900b-b905-4d01-8543-ef1f9028db1f is in state STARTED 2025-09-19 00:59:27.107240 | orchestrator | 2025-09-19 00:59:27 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 00:59:27.107287 | orchestrator | 2025-09-19 00:59:27 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:59:30.146422 | orchestrator | 2025-09-19 00:59:30 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 00:59:30.147378 | orchestrator | 2025-09-19 00:59:30 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 00:59:30.148570 | orchestrator | 2025-09-19 00:59:30 | INFO  | Task 7442900b-b905-4d01-8543-ef1f9028db1f is in state STARTED 2025-09-19 00:59:30.151525 | orchestrator | 2025-09-19 00:59:30 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 00:59:30.151558 | orchestrator | 2025-09-19 00:59:30 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:59:33.198332 | orchestrator | 2025-09-19 00:59:33 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 00:59:33.200329 | orchestrator | 2025-09-19 00:59:33 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 00:59:33.203217 | orchestrator | 2025-09-19 00:59:33 | INFO  | Task 7442900b-b905-4d01-8543-ef1f9028db1f is in state STARTED 2025-09-19 00:59:33.205573 | orchestrator | 2025-09-19 00:59:33 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 00:59:33.205892 | orchestrator | 2025-09-19 00:59:33 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:59:36.253973 | orchestrator | 2025-09-19 00:59:36 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 00:59:36.254131 | orchestrator | 2025-09-19 00:59:36 | INFO  | Task 
7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 00:59:36.255249 | orchestrator | 2025-09-19 00:59:36 | INFO  | Task 7442900b-b905-4d01-8543-ef1f9028db1f is in state STARTED 2025-09-19 00:59:36.256057 | orchestrator | 2025-09-19 00:59:36 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 00:59:36.256082 | orchestrator | 2025-09-19 00:59:36 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:59:39.323348 | orchestrator | 2025-09-19 00:59:39 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 00:59:39.325133 | orchestrator | 2025-09-19 00:59:39 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 00:59:39.327242 | orchestrator | 2025-09-19 00:59:39 | INFO  | Task 7442900b-b905-4d01-8543-ef1f9028db1f is in state STARTED 2025-09-19 00:59:39.330524 | orchestrator | 2025-09-19 00:59:39 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 00:59:39.331848 | orchestrator | 2025-09-19 00:59:39 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:59:42.377681 | orchestrator | 2025-09-19 00:59:42 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 00:59:42.379668 | orchestrator | 2025-09-19 00:59:42 | INFO  | Task 908bf08e-0fd4-44c4-9d72-348091e3ba4d is in state STARTED 2025-09-19 00:59:42.382826 | orchestrator | 2025-09-19 00:59:42 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 00:59:42.386326 | orchestrator | 2025-09-19 00:59:42 | INFO  | Task 7442900b-b905-4d01-8543-ef1f9028db1f is in state SUCCESS 2025-09-19 00:59:42.387805 | orchestrator | 2025-09-19 00:59:42.387858 | orchestrator | 2025-09-19 00:59:42.387903 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 00:59:42.387922 | orchestrator | 2025-09-19 00:59:42.387940 | orchestrator | TASK [Group hosts based on Kolla action] 
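The repeated `Task <uuid> is in state STARTED` / `Wait 1 second(s) until the next check` lines above, ending in a `SUCCESS` state, are the output of a simple poll-until-terminal-state loop over the task queue. A minimal sketch of that pattern (the function name, state strings, and `get_state` callable are assumptions for illustration, not the actual osism client API):

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}  # assumed terminal task states

def wait_for_task(get_state, task_id, interval=1, log=print, sleep=time.sleep):
    """Poll the task state every `interval` seconds until it is terminal.

    `get_state(task_id)` is a hypothetical stand-in for the real
    task-state lookup; it returns a state string such as "STARTED".
    """
    while True:
        state = get_state(task_id)
        log(f"Task {task_id} is in state {state}")
        if state in TERMINAL_STATES:
            return state
        log(f"Wait {interval} second(s) until the next check")
        sleep(interval)

# Example with a canned state sequence instead of a live task queue:
states = iter(["STARTED", "STARTED", "SUCCESS"])
result = wait_for_task(lambda _tid: next(states),
                       "7442900b-b905-4d01-8543-ef1f9028db1f",
                       sleep=lambda _s: None)  # skip real sleeping in the demo
```

Several tasks are polled concurrently in the log above; the same loop simply runs once per outstanding task ID on each pass.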
*************************************** 2025-09-19 00:59:42.387958 | orchestrator | Friday 19 September 2025 00:58:29 +0000 (0:00:00.200) 0:00:00.200 ****** 2025-09-19 00:59:42.387976 | orchestrator | ok: [testbed-node-0] 2025-09-19 00:59:42.387995 | orchestrator | ok: [testbed-node-1] 2025-09-19 00:59:42.388014 | orchestrator | ok: [testbed-node-2] 2025-09-19 00:59:42.388032 | orchestrator | 2025-09-19 00:59:42.388052 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 00:59:42.388063 | orchestrator | Friday 19 September 2025 00:58:29 +0000 (0:00:00.328) 0:00:00.528 ****** 2025-09-19 00:59:42.388075 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-09-19 00:59:42.388086 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-09-19 00:59:42.388097 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-09-19 00:59:42.388108 | orchestrator | 2025-09-19 00:59:42.388120 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-09-19 00:59:42.388130 | orchestrator | 2025-09-19 00:59:42.388141 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-19 00:59:42.388152 | orchestrator | Friday 19 September 2025 00:58:30 +0000 (0:00:00.679) 0:00:01.208 ****** 2025-09-19 00:59:42.388163 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:59:42.388175 | orchestrator | 2025-09-19 00:59:42.388185 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-09-19 00:59:42.388196 | orchestrator | Friday 19 September 2025 00:58:31 +0000 (0:00:00.660) 0:00:01.869 ****** 2025-09-19 00:59:42.388207 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-09-19 00:59:42.388218 | orchestrator | 2025-09-19 00:59:42.388228 | 
orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-09-19 00:59:42.388239 | orchestrator | Friday 19 September 2025 00:58:35 +0000 (0:00:03.735) 0:00:05.604 ****** 2025-09-19 00:59:42.388249 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-09-19 00:59:42.388260 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-09-19 00:59:42.388271 | orchestrator | 2025-09-19 00:59:42.388282 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-09-19 00:59:42.388292 | orchestrator | Friday 19 September 2025 00:58:41 +0000 (0:00:06.745) 0:00:12.350 ****** 2025-09-19 00:59:42.388303 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-19 00:59:42.388314 | orchestrator | 2025-09-19 00:59:42.388325 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-09-19 00:59:42.388335 | orchestrator | Friday 19 September 2025 00:58:45 +0000 (0:00:03.299) 0:00:15.649 ****** 2025-09-19 00:59:42.388346 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 00:59:42.388357 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-09-19 00:59:42.388368 | orchestrator | 2025-09-19 00:59:42.388381 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-09-19 00:59:42.388393 | orchestrator | Friday 19 September 2025 00:58:49 +0000 (0:00:04.006) 0:00:19.656 ****** 2025-09-19 00:59:42.388405 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 00:59:42.388417 | orchestrator | 2025-09-19 00:59:42.388430 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-09-19 00:59:42.388466 | orchestrator | Friday 19 September 2025 00:58:52 +0000 (0:00:03.243) 0:00:22.899 
****** 2025-09-19 00:59:42.388479 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-09-19 00:59:42.388491 | orchestrator | 2025-09-19 00:59:42.388503 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-19 00:59:42.388525 | orchestrator | Friday 19 September 2025 00:58:56 +0000 (0:00:04.102) 0:00:27.001 ****** 2025-09-19 00:59:42.388537 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:59:42.388549 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:59:42.388561 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:59:42.388573 | orchestrator | 2025-09-19 00:59:42.388585 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-09-19 00:59:42.388597 | orchestrator | Friday 19 September 2025 00:58:56 +0000 (0:00:00.298) 0:00:27.299 ****** 2025-09-19 00:59:42.388626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 00:59:42.388660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 
'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 00:59:42.388675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 00:59:42.388688 | orchestrator | 2025-09-19 00:59:42.388701 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-09-19 00:59:42.388719 | orchestrator | Friday 19 September 2025 00:58:57 
+0000 (0:00:01.037) 0:00:28.337 ****** 2025-09-19 00:59:42.388738 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:59:42.388757 | orchestrator | 2025-09-19 00:59:42.388776 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-09-19 00:59:42.388795 | orchestrator | Friday 19 September 2025 00:58:57 +0000 (0:00:00.141) 0:00:28.479 ****** 2025-09-19 00:59:42.388814 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:59:42.388833 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:59:42.388866 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:59:42.388883 | orchestrator | 2025-09-19 00:59:42.388894 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-19 00:59:42.388905 | orchestrator | Friday 19 September 2025 00:58:58 +0000 (0:00:00.505) 0:00:28.984 ****** 2025-09-19 00:59:42.388916 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 00:59:42.388927 | orchestrator | 2025-09-19 00:59:42.388937 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-09-19 00:59:42.388948 | orchestrator | Friday 19 September 2025 00:58:58 +0000 (0:00:00.515) 0:00:29.499 ****** 2025-09-19 00:59:42.388960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 00:59:42.389130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 00:59:42.389156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 00:59:42.389176 | orchestrator | 2025-09-19 00:59:42.389195 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-09-19 00:59:42.389216 | orchestrator | Friday 19 September 2025 00:59:00 +0000 (0:00:01.499) 0:00:30.998 ****** 2025-09-19 00:59:42.389229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 00:59:42.389254 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:59:42.389278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 00:59:42.389296 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:59:42.389335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 00:59:42.389355 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:59:42.389373 | orchestrator | 2025-09-19 00:59:42.389384 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-09-19 00:59:42.389395 | orchestrator | Friday 19 September 2025 00:59:01 +0000 (0:00:00.719) 0:00:31.718 ****** 2025-09-19 00:59:42.389406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 00:59:42.389418 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:59:42.389429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 00:59:42.389569 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:59:42.389586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 00:59:42.389598 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:59:42.389608 | orchestrator | 2025-09-19 00:59:42.389619 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-09-19 00:59:42.389630 | orchestrator | Friday 19 September 2025 00:59:01 +0000 (0:00:00.740) 0:00:32.459 ****** 2025-09-19 00:59:42.389659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 00:59:42.389681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 00:59:42.389702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 00:59:42.389733 | orchestrator | 2025-09-19 00:59:42.389748 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-09-19 00:59:42.389758 | orchestrator | Friday 19 September 2025 00:59:03 +0000 (0:00:01.370) 0:00:33.829 ****** 2025-09-19 00:59:42.389770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 00:59:42.389787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 00:59:42.389807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 00:59:42.389819 | orchestrator | 2025-09-19 00:59:42.389830 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-09-19 00:59:42.389849 | orchestrator | Friday 19 September 2025 00:59:05 +0000 (0:00:02.346) 0:00:36.175 ****** 2025-09-19 00:59:42.389860 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-19 00:59:42.389871 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-19 00:59:42.389882 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-19 00:59:42.389893 | orchestrator | 2025-09-19 00:59:42.389903 | orchestrator | TASK 
[placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-09-19 00:59:42.389914 | orchestrator | Friday 19 September 2025 00:59:07 +0000 (0:00:01.582) 0:00:37.758 ****** 2025-09-19 00:59:42.389925 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:59:42.389935 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:59:42.389946 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:59:42.389957 | orchestrator | 2025-09-19 00:59:42.389967 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-09-19 00:59:42.389978 | orchestrator | Friday 19 September 2025 00:59:08 +0000 (0:00:01.430) 0:00:39.188 ****** 2025-09-19 00:59:42.389989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 00:59:42.390001 | orchestrator | skipping: [testbed-node-0] 2025-09-19 00:59:42.390062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 00:59:42.390077 | orchestrator | skipping: [testbed-node-2] 2025-09-19 00:59:42.390102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 00:59:42.390131 | orchestrator | skipping: [testbed-node-1] 2025-09-19 00:59:42.390142 | orchestrator | 2025-09-19 00:59:42.390153 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-09-19 00:59:42.390164 | orchestrator | Friday 19 September 
2025 00:59:09 +0000 (0:00:00.525) 0:00:39.713 ****** 2025-09-19 00:59:42.390175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 00:59:42.390187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 00:59:42.390201 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 00:59:42.390214 | orchestrator | 2025-09-19 00:59:42.390226 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-09-19 00:59:42.390238 | orchestrator | Friday 19 September 2025 00:59:10 +0000 (0:00:01.346) 0:00:41.060 ****** 2025-09-19 00:59:42.390251 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:59:42.390263 | orchestrator | 2025-09-19 00:59:42.390275 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-09-19 00:59:42.390287 | orchestrator | Friday 19 September 2025 00:59:12 +0000 (0:00:02.268) 0:00:43.328 ****** 2025-09-19 00:59:42.390300 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:59:42.390311 | orchestrator | 2025-09-19 00:59:42.390329 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-09-19 00:59:42.390341 | orchestrator | Friday 19 September 2025 00:59:15 +0000 (0:00:02.577) 0:00:45.905 ****** 2025-09-19 00:59:42.390368 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:59:42.390381 
| orchestrator | 2025-09-19 00:59:42.390393 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-19 00:59:42.390405 | orchestrator | Friday 19 September 2025 00:59:29 +0000 (0:00:14.211) 0:01:00.116 ****** 2025-09-19 00:59:42.390417 | orchestrator | 2025-09-19 00:59:42.390430 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-19 00:59:42.390494 | orchestrator | Friday 19 September 2025 00:59:29 +0000 (0:00:00.063) 0:01:00.180 ****** 2025-09-19 00:59:42.390508 | orchestrator | 2025-09-19 00:59:42.390521 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-19 00:59:42.390533 | orchestrator | Friday 19 September 2025 00:59:29 +0000 (0:00:00.065) 0:01:00.245 ****** 2025-09-19 00:59:42.390545 | orchestrator | 2025-09-19 00:59:42.390558 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-09-19 00:59:42.390570 | orchestrator | Friday 19 September 2025 00:59:29 +0000 (0:00:00.066) 0:01:00.311 ****** 2025-09-19 00:59:42.390581 | orchestrator | changed: [testbed-node-0] 2025-09-19 00:59:42.390592 | orchestrator | changed: [testbed-node-2] 2025-09-19 00:59:42.390603 | orchestrator | changed: [testbed-node-1] 2025-09-19 00:59:42.390613 | orchestrator | 2025-09-19 00:59:42.390624 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 00:59:42.390636 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 00:59:42.390649 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 00:59:42.390659 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 00:59:42.390670 | orchestrator | 2025-09-19 00:59:42.390681 | orchestrator | 2025-09-19 
00:59:42.390691 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 00:59:42.390702 | orchestrator | Friday 19 September 2025 00:59:39 +0000 (0:00:10.045) 0:01:10.357 ****** 2025-09-19 00:59:42.390713 | orchestrator | =============================================================================== 2025-09-19 00:59:42.390723 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.21s 2025-09-19 00:59:42.390734 | orchestrator | placement : Restart placement-api container ---------------------------- 10.05s 2025-09-19 00:59:42.390745 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.75s 2025-09-19 00:59:42.390755 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.10s 2025-09-19 00:59:42.390766 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.01s 2025-09-19 00:59:42.390777 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.74s 2025-09-19 00:59:42.390787 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.30s 2025-09-19 00:59:42.390798 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.24s 2025-09-19 00:59:42.390809 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.58s 2025-09-19 00:59:42.390819 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.35s 2025-09-19 00:59:42.390830 | orchestrator | placement : Creating placement databases -------------------------------- 2.27s 2025-09-19 00:59:42.390841 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.58s 2025-09-19 00:59:42.390851 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.50s 2025-09-19 00:59:42.390862 | 
orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.43s 2025-09-19 00:59:42.390873 | orchestrator | placement : Copying over config.json files for services ----------------- 1.37s 2025-09-19 00:59:42.390892 | orchestrator | placement : Check placement containers ---------------------------------- 1.35s 2025-09-19 00:59:42.390903 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.04s 2025-09-19 00:59:42.390913 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.74s 2025-09-19 00:59:42.390924 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.72s 2025-09-19 00:59:42.390935 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.68s 2025-09-19 00:59:42.390946 | orchestrator | 2025-09-19 00:59:42 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 00:59:42.390956 | orchestrator | 2025-09-19 00:59:42 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:59:45.441893 | orchestrator | 2025-09-19 00:59:45 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 00:59:45.443769 | orchestrator | 2025-09-19 00:59:45 | INFO  | Task 908bf08e-0fd4-44c4-9d72-348091e3ba4d is in state STARTED 2025-09-19 00:59:45.445114 | orchestrator | 2025-09-19 00:59:45 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 00:59:45.446707 | orchestrator | 2025-09-19 00:59:45 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 00:59:45.446752 | orchestrator | 2025-09-19 00:59:45 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:59:48.490975 | orchestrator | 2025-09-19 00:59:48 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 00:59:48.492841 | orchestrator | 2025-09-19 00:59:48 | INFO  | Task 
908bf08e-0fd4-44c4-9d72-348091e3ba4d is in state SUCCESS 2025-09-19 00:59:48.493890 | orchestrator | 2025-09-19 00:59:48 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 00:59:48.495562 | orchestrator | 2025-09-19 00:59:48 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 00:59:48.497240 | orchestrator | 2025-09-19 00:59:48 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 00:59:48.497477 | orchestrator | 2025-09-19 00:59:48 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:59:51.551026 | orchestrator | 2025-09-19 00:59:51 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 00:59:51.553069 | orchestrator | 2025-09-19 00:59:51 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 00:59:51.554912 | orchestrator | 2025-09-19 00:59:51 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 00:59:51.556959 | orchestrator | 2025-09-19 00:59:51 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 00:59:51.556979 | orchestrator | 2025-09-19 00:59:51 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:59:54.602924 | orchestrator | 2025-09-19 00:59:54 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 00:59:54.604558 | orchestrator | 2025-09-19 00:59:54 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 00:59:54.605280 | orchestrator | 2025-09-19 00:59:54 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 00:59:54.607613 | orchestrator | 2025-09-19 00:59:54 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 00:59:54.607662 | orchestrator | 2025-09-19 00:59:54 | INFO  | Wait 1 second(s) until the next check 2025-09-19 00:59:57.652633 | orchestrator | 2025-09-19 00:59:57 | INFO  | Task 
b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 00:59:57.652959 | orchestrator | 2025-09-19 00:59:57 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 00:59:57.653627 | orchestrator | 2025-09-19 00:59:57 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 00:59:57.654343 | orchestrator | 2025-09-19 00:59:57 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 00:59:57.654369 | orchestrator | 2025-09-19 00:59:57 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:00:00.716757 | orchestrator | 2025-09-19 01:00:00 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 01:00:00.716975 | orchestrator | 2025-09-19 01:00:00 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 01:00:00.717771 | orchestrator | 2025-09-19 01:00:00 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:00:00.718756 | orchestrator | 2025-09-19 01:00:00 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:00:00.719714 | orchestrator | 2025-09-19 01:00:00 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:00:03.763246 | orchestrator | 2025-09-19 01:00:03 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 01:00:03.763336 | orchestrator | 2025-09-19 01:00:03 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 01:00:03.763351 | orchestrator | 2025-09-19 01:00:03 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:00:03.763609 | orchestrator | 2025-09-19 01:00:03 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:00:03.763633 | orchestrator | 2025-09-19 01:00:03 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:00:06.801049 | orchestrator | 2025-09-19 01:00:06 | INFO  | Task 
b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 01:00:06.801151 | orchestrator | 2025-09-19 01:00:06 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 01:00:06.801167 | orchestrator | 2025-09-19 01:00:06 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:00:06.801178 | orchestrator | 2025-09-19 01:00:06 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:00:06.801190 | orchestrator | 2025-09-19 01:00:06 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:00:09.840612 | orchestrator | 2025-09-19 01:00:09 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 01:00:09.840679 | orchestrator | 2025-09-19 01:00:09 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 01:00:09.840689 | orchestrator | 2025-09-19 01:00:09 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:00:09.840697 | orchestrator | 2025-09-19 01:00:09 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:00:09.840705 | orchestrator | 2025-09-19 01:00:09 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:00:12.863918 | orchestrator | 2025-09-19 01:00:12 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 01:00:12.866869 | orchestrator | 2025-09-19 01:00:12 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 01:00:12.869189 | orchestrator | 2025-09-19 01:00:12 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:00:12.871571 | orchestrator | 2025-09-19 01:00:12 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:00:12.871901 | orchestrator | 2025-09-19 01:00:12 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:00:15.922679 | orchestrator | 2025-09-19 01:00:15 | INFO  | Task 
b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 01:00:15.922773 | orchestrator | 2025-09-19 01:00:15 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 01:00:15.922787 | orchestrator | 2025-09-19 01:00:15 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:00:15.922797 | orchestrator | 2025-09-19 01:00:15 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:00:15.922808 | orchestrator | 2025-09-19 01:00:15 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:00:18.934846 | orchestrator | 2025-09-19 01:00:18 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 01:00:18.934929 | orchestrator | 2025-09-19 01:00:18 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 01:00:18.935612 | orchestrator | 2025-09-19 01:00:18 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:00:18.936130 | orchestrator | 2025-09-19 01:00:18 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:00:18.936157 | orchestrator | 2025-09-19 01:00:18 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:00:21.960786 | orchestrator | 2025-09-19 01:00:21 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 01:00:21.961114 | orchestrator | 2025-09-19 01:00:21 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 01:00:21.961159 | orchestrator | 2025-09-19 01:00:21 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:00:21.961685 | orchestrator | 2025-09-19 01:00:21 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:00:21.961718 | orchestrator | 2025-09-19 01:00:21 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:00:24.992684 | orchestrator | 2025-09-19 01:00:24 | INFO  | Task 
b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 01:00:24.993022 | orchestrator | 2025-09-19 01:00:24 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 01:00:24.993544 | orchestrator | 2025-09-19 01:00:24 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:00:24.994233 | orchestrator | 2025-09-19 01:00:24 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:00:24.994270 | orchestrator | 2025-09-19 01:00:24 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:00:28.017703 | orchestrator | 2025-09-19 01:00:28 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 01:00:28.017792 | orchestrator | 2025-09-19 01:00:28 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 01:00:28.018088 | orchestrator | 2025-09-19 01:00:28 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:00:28.018701 | orchestrator | 2025-09-19 01:00:28 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:00:28.018772 | orchestrator | 2025-09-19 01:00:28 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:00:31.066713 | orchestrator | 2025-09-19 01:00:31 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 01:00:31.067755 | orchestrator | 2025-09-19 01:00:31 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 01:00:31.069761 | orchestrator | 2025-09-19 01:00:31 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:00:31.071275 | orchestrator | 2025-09-19 01:00:31 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:00:31.071342 | orchestrator | 2025-09-19 01:00:31 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:00:34.111970 | orchestrator | 2025-09-19 01:00:34 | INFO  | Task 
b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 01:00:34.113253 | orchestrator | 2025-09-19 01:00:34 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 01:00:34.115319 | orchestrator | 2025-09-19 01:00:34 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:00:34.117312 | orchestrator | 2025-09-19 01:00:34 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:00:34.117381 | orchestrator | 2025-09-19 01:00:34 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:00:37.150836 | orchestrator | 2025-09-19 01:00:37 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 01:00:37.152875 | orchestrator | 2025-09-19 01:00:37 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 01:00:37.155546 | orchestrator | 2025-09-19 01:00:37 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:00:37.158623 | orchestrator | 2025-09-19 01:00:37 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:00:37.159017 | orchestrator | 2025-09-19 01:00:37 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:00:40.205045 | orchestrator | 2025-09-19 01:00:40 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 01:00:40.205150 | orchestrator | 2025-09-19 01:00:40 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 01:00:40.205499 | orchestrator | 2025-09-19 01:00:40 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:00:40.206350 | orchestrator | 2025-09-19 01:00:40 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:00:40.206421 | orchestrator | 2025-09-19 01:00:40 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:00:43.240382 | orchestrator | 2025-09-19 01:00:43 | INFO  | Task 
b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 01:00:43.242415 | orchestrator | 2025-09-19 01:00:43 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 01:00:43.244068 | orchestrator | 2025-09-19 01:00:43 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:00:43.247207 | orchestrator | 2025-09-19 01:00:43 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:00:43.248078 | orchestrator | 2025-09-19 01:00:43 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:00:46.290762 | orchestrator | 2025-09-19 01:00:46 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 01:00:46.293652 | orchestrator | 2025-09-19 01:00:46 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 01:00:46.297953 | orchestrator | 2025-09-19 01:00:46 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:00:46.300338 | orchestrator | 2025-09-19 01:00:46 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:00:46.300975 | orchestrator | 2025-09-19 01:00:46 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:00:49.347299 | orchestrator | 2025-09-19 01:00:49 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 01:00:49.348635 | orchestrator | 2025-09-19 01:00:49 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 01:00:49.350473 | orchestrator | 2025-09-19 01:00:49 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:00:49.352052 | orchestrator | 2025-09-19 01:00:49 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:00:49.352495 | orchestrator | 2025-09-19 01:00:49 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:00:52.374103 | orchestrator | 2025-09-19 01:00:52 | INFO  | Task 
b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state STARTED 2025-09-19 01:00:52.374954 | orchestrator | 2025-09-19 01:00:52 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED 2025-09-19 01:00:52.375639 | orchestrator | 2025-09-19 01:00:52 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:00:52.376316 | orchestrator | 2025-09-19 01:00:52 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:00:52.376341 | orchestrator | 2025-09-19 01:00:52 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:00:55.419737 | orchestrator | 2025-09-19 01:00:55.419822 | orchestrator | 2025-09-19 01:00:55.419837 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 01:00:55.419850 | orchestrator | 2025-09-19 01:00:55.419862 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 01:00:55.419874 | orchestrator | Friday 19 September 2025 00:59:44 +0000 (0:00:00.177) 0:00:00.177 ****** 2025-09-19 01:00:55.419911 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:00:55.419924 | orchestrator | ok: [testbed-node-1] 2025-09-19 01:00:55.419935 | orchestrator | ok: [testbed-node-2] 2025-09-19 01:00:55.419946 | orchestrator | 2025-09-19 01:00:55.419958 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 01:00:55.419969 | orchestrator | Friday 19 September 2025 00:59:44 +0000 (0:00:00.311) 0:00:00.488 ****** 2025-09-19 01:00:55.419980 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-19 01:00:55.420014 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-19 01:00:55.420027 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-19 01:00:55.420038 | orchestrator | 2025-09-19 01:00:55.420049 | orchestrator | PLAY [Wait for the Keystone service] 
*******************************************
2025-09-19 01:00:55.420082 | orchestrator |
2025-09-19 01:00:55.420093 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-09-19 01:00:55.420121 | orchestrator | Friday 19 September 2025 00:59:44 +0000 (0:00:00.584) 0:00:01.073 ******
2025-09-19 01:00:55.420133 | orchestrator | ok: [testbed-node-1]
2025-09-19 01:00:55.420144 | orchestrator | ok: [testbed-node-2]
2025-09-19 01:00:55.420155 | orchestrator | ok: [testbed-node-0]
2025-09-19 01:00:55.420210 | orchestrator |
2025-09-19 01:00:55.420222 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 01:00:55.420234 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 01:00:55.420247 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 01:00:55.420258 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 01:00:55.420269 | orchestrator |
2025-09-19 01:00:55.420280 | orchestrator |
2025-09-19 01:00:55.420293 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 01:00:55.420306 | orchestrator | Friday 19 September 2025 00:59:45 +0000 (0:00:00.645) 0:00:01.718 ******
2025-09-19 01:00:55.420318 | orchestrator | ===============================================================================
2025-09-19 01:00:55.420330 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.65s
2025-09-19 01:00:55.420343 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s
2025-09-19 01:00:55.420379 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2025-09-19 01:00:55.420411 | orchestrator |
2025-09-19 01:00:55.420424 | orchestrator |
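(Editor's note, outside the log: the "Waiting for Keystone public port to be UP" task above is, at its core, a TCP reachability probe against the Keystone endpoint. A minimal sketch of that check; `port_is_up` is a hypothetical helper, not the actual Ansible `wait_for` implementation, and the hostname in the comment is taken from the log:)

```python
import socket


def port_is_up(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection resolves the host and completes the TCP handshake.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and DNS failures.
        return False


# Illustrative usage against the endpoint seen in the log:
# port_is_up("api.testbed.osism.xyz", 5000)
```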
2025-09-19 01:00:55.420437 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 01:00:55.420450 | orchestrator |
2025-09-19 01:00:55.420462 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 01:00:55.420475 | orchestrator | Friday 19 September 2025 00:56:25 +0000 (0:00:00.292) 0:00:00.292 ******
2025-09-19 01:00:55.420487 | orchestrator | ok: [testbed-node-0]
2025-09-19 01:00:55.420500 | orchestrator | ok: [testbed-node-1]
2025-09-19 01:00:55.420513 | orchestrator | ok: [testbed-node-2]
2025-09-19 01:00:55.420527 | orchestrator | ok: [testbed-node-3]
2025-09-19 01:00:55.420539 | orchestrator | ok: [testbed-node-4]
2025-09-19 01:00:55.420551 | orchestrator | ok: [testbed-node-5]
2025-09-19 01:00:55.420563 | orchestrator |
2025-09-19 01:00:55.420576 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 01:00:55.420588 | orchestrator | Friday 19 September 2025 00:56:25 +0000 (0:00:00.730) 0:00:01.023 ******
2025-09-19 01:00:55.420601 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-09-19 01:00:55.420614 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-09-19 01:00:55.420627 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-09-19 01:00:55.420640 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-09-19 01:00:55.420651 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-09-19 01:00:55.420661 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-09-19 01:00:55.420672 | orchestrator |
2025-09-19 01:00:55.420683 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-09-19 01:00:55.420694 | orchestrator |
2025-09-19 01:00:55.420704 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-19 01:00:55.420715 | orchestrator | Friday 19 September 2025 00:56:26 +0000 (0:00:00.691) 0:00:01.715 ******
2025-09-19 01:00:55.420726 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 01:00:55.420738 | orchestrator |
2025-09-19 01:00:55.420749 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-09-19 01:00:55.420760 | orchestrator | Friday 19 September 2025 00:56:27 +0000 (0:00:00.940) 0:00:02.655 ******
2025-09-19 01:00:55.420771 | orchestrator | ok: [testbed-node-1]
2025-09-19 01:00:55.420781 | orchestrator | ok: [testbed-node-0]
2025-09-19 01:00:55.420792 | orchestrator | ok: [testbed-node-2]
2025-09-19 01:00:55.420803 | orchestrator | ok: [testbed-node-3]
2025-09-19 01:00:55.420813 | orchestrator | ok: [testbed-node-4]
2025-09-19 01:00:55.420824 | orchestrator | ok: [testbed-node-5]
2025-09-19 01:00:55.420834 | orchestrator |
2025-09-19 01:00:55.420845 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-09-19 01:00:55.420856 | orchestrator | Friday 19 September 2025 00:56:28 +0000 (0:00:01.176) 0:00:03.831 ******
2025-09-19 01:00:55.420867 | orchestrator | ok: [testbed-node-0]
2025-09-19 01:00:55.420877 | orchestrator | ok: [testbed-node-2]
2025-09-19 01:00:55.420888 | orchestrator | ok: [testbed-node-1]
2025-09-19 01:00:55.420898 | orchestrator | ok: [testbed-node-3]
2025-09-19 01:00:55.420909 | orchestrator | ok: [testbed-node-4]
2025-09-19 01:00:55.420935 | orchestrator | ok: [testbed-node-5]
2025-09-19 01:00:55.420946 | orchestrator |
2025-09-19 01:00:55.420957 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-09-19 01:00:55.420968 | orchestrator | Friday 19 September 2025 00:56:29 +0000 (0:00:01.065) 0:00:04.896 ******
2025-09-19 01:00:55.420979 | orchestrator | ok: [testbed-node-0] => {
2025-09-19 01:00:55.420990 | orchestrator |  "changed": false,
2025-09-19 01:00:55.421000 | orchestrator |  "msg": "All assertions passed"
2025-09-19 01:00:55.421011 | orchestrator | }
2025-09-19 01:00:55.421023 | orchestrator | ok: [testbed-node-1] => {
2025-09-19 01:00:55.421042 | orchestrator |  "changed": false,
2025-09-19 01:00:55.421053 | orchestrator |  "msg": "All assertions passed"
2025-09-19 01:00:55.421063 | orchestrator | }
2025-09-19 01:00:55.421074 | orchestrator | ok: [testbed-node-2] => {
2025-09-19 01:00:55.421085 | orchestrator |  "changed": false,
2025-09-19 01:00:55.421096 | orchestrator |  "msg": "All assertions passed"
2025-09-19 01:00:55.421106 | orchestrator | }
2025-09-19 01:00:55.421117 | orchestrator | ok: [testbed-node-3] => {
2025-09-19 01:00:55.421127 | orchestrator |  "changed": false,
2025-09-19 01:00:55.421138 | orchestrator |  "msg": "All assertions passed"
2025-09-19 01:00:55.421148 | orchestrator | }
2025-09-19 01:00:55.421159 | orchestrator | ok: [testbed-node-4] => {
2025-09-19 01:00:55.421169 | orchestrator |  "changed": false,
2025-09-19 01:00:55.421180 | orchestrator |  "msg": "All assertions passed"
2025-09-19 01:00:55.421190 | orchestrator | }
2025-09-19 01:00:55.421201 | orchestrator | ok: [testbed-node-5] => {
2025-09-19 01:00:55.421211 | orchestrator |  "changed": false,
2025-09-19 01:00:55.421222 | orchestrator |  "msg": "All assertions passed"
2025-09-19 01:00:55.421233 | orchestrator | }
2025-09-19 01:00:55.421243 | orchestrator |
2025-09-19 01:00:55.421254 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-09-19 01:00:55.421264 | orchestrator | Friday 19 September 2025 00:56:30 +0000 (0:00:00.726) 0:00:05.623 ******
2025-09-19 01:00:55.421275 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:00:55.421286 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:00:55.421296 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:00:55.421307 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:00:55.421317 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:00:55.421328 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:00:55.421338 | orchestrator |
2025-09-19 01:00:55.421349 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-09-19 01:00:55.421360 | orchestrator | Friday 19 September 2025 00:56:30 +0000 (0:00:00.563) 0:00:06.187 ******
2025-09-19 01:00:55.421371 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-09-19 01:00:55.421394 | orchestrator |
2025-09-19 01:00:55.421406 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-09-19 01:00:55.421416 | orchestrator | Friday 19 September 2025 00:56:34 +0000 (0:00:03.421) 0:00:09.608 ******
2025-09-19 01:00:55.421427 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-09-19 01:00:55.421438 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-09-19 01:00:55.421449 | orchestrator |
2025-09-19 01:00:55.421460 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-09-19 01:00:55.421470 | orchestrator | Friday 19 September 2025 00:56:41 +0000 (0:00:06.716) 0:00:16.325 ******
2025-09-19 01:00:55.421481 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 01:00:55.421492 | orchestrator |
2025-09-19 01:00:55.421502 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-09-19 01:00:55.421513 | orchestrator | Friday 19 September 2025 00:56:44 +0000 (0:00:03.701) 0:00:20.027 ******
2025-09-19 01:00:55.421524 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 01:00:55.421534 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-09-19 01:00:55.421545 | orchestrator |
2025-09-19 01:00:55.421556 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-09-19 01:00:55.421566 | orchestrator | Friday 19 September 2025 00:56:48 +0000 (0:00:04.005) 0:00:24.032 ******
2025-09-19 01:00:55.421577 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 01:00:55.421594 | orchestrator |
2025-09-19 01:00:55.421614 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-09-19 01:00:55.421647 | orchestrator | Friday 19 September 2025 00:56:52 +0000 (0:00:03.887) 0:00:27.919 ******
2025-09-19 01:00:55.421667 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-09-19 01:00:55.421699 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-09-19 01:00:55.421718 | orchestrator |
2025-09-19 01:00:55.421736 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-19 01:00:55.421755 | orchestrator | Friday 19 September 2025 00:57:01 +0000 (0:00:08.617) 0:00:36.537 ******
2025-09-19 01:00:55.421775 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:00:55.421795 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:00:55.421816 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:00:55.421836 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:00:55.421856 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:00:55.421876 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:00:55.421898 | orchestrator |
2025-09-19 01:00:55.421918 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-09-19 01:00:55.421934 | orchestrator | Friday 19 September 2025 00:57:02 +0000 (0:00:00.751) 0:00:37.289 ******
2025-09-19 01:00:55.421945 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:00:55.421956 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:00:55.421967 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:00:55.421977 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:00:55.421988 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:00:55.421998 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:00:55.422009 | orchestrator |
2025-09-19 01:00:55.422082 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-09-19 01:00:55.422094 | orchestrator | Friday 19 September 2025 00:57:04 +0000 (0:00:02.637) 0:00:39.926 ******
2025-09-19 01:00:55.422105 | orchestrator | ok: [testbed-node-0]
2025-09-19 01:00:55.422115 | orchestrator | ok: [testbed-node-1]
2025-09-19 01:00:55.422126 | orchestrator | ok: [testbed-node-2]
2025-09-19 01:00:55.422137 | orchestrator | ok: [testbed-node-3]
2025-09-19 01:00:55.422147 | orchestrator | ok: [testbed-node-4]
2025-09-19 01:00:55.422171 | orchestrator | ok: [testbed-node-5]
2025-09-19 01:00:55.422182 | orchestrator |
2025-09-19 01:00:55.422193 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-09-19 01:00:55.422203 | orchestrator | Friday 19 September 2025 00:57:07 +0000 (0:00:02.454) 0:00:42.381 ******
2025-09-19 01:00:55.422214 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:00:55.422225 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:00:55.422236 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:00:55.422247 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:00:55.422258 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:00:55.422268 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:00:55.422279 | orchestrator |
2025-09-19 01:00:55.422290 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2025-09-19 01:00:55.422300 | orchestrator | Friday 19 September 2025 00:57:09 +0000 (0:00:02.692) 0:00:45.074 ******
2025-09-19
01:00:55.422314 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 01:00:55.422329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 01:00:55.422351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 01:00:55.422363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 01:00:55.422415 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 01:00:55.422428 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 01:00:55.422440 | orchestrator | 2025-09-19 01:00:55.422451 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-09-19 01:00:55.422469 | orchestrator | Friday 19 September 2025 00:57:13 +0000 (0:00:03.249) 0:00:48.323 ****** 2025-09-19 01:00:55.422480 | orchestrator | [WARNING]: Skipped 2025-09-19 01:00:55.422492 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-09-19 01:00:55.422503 | orchestrator | due to this access issue: 2025-09-19 01:00:55.422514 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-09-19 01:00:55.422525 | orchestrator | a directory 2025-09-19 01:00:55.422536 
| orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 01:00:55.422547 | orchestrator | 2025-09-19 01:00:55.422558 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-19 01:00:55.422568 | orchestrator | Friday 19 September 2025 00:57:13 +0000 (0:00:00.836) 0:00:49.160 ****** 2025-09-19 01:00:55.422579 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 01:00:55.422591 | orchestrator | 2025-09-19 01:00:55.422601 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-09-19 01:00:55.422612 | orchestrator | Friday 19 September 2025 00:57:15 +0000 (0:00:01.235) 0:00:50.395 ****** 2025-09-19 01:00:55.422623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 01:00:55.422642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 01:00:55.422654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 01:00:55.422672 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.422684 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.422696 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.422707 | orchestrator |
2025-09-19 01:00:55.422718 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2025-09-19 01:00:55.422729 | orchestrator | Friday 19 September 2025 00:57:19 +0000 (0:00:04.082) 0:00:54.477 ******
2025-09-19 01:00:55.422747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 01:00:55.422759 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:00:55.422770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.422788 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:00:55.422800 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.422811 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:00:55.422822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.422833 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:00:55.422845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 01:00:55.422856 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:00:55.422974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 01:00:55.422998 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:00:55.423009 | orchestrator |
2025-09-19 01:00:55.423050 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2025-09-19 01:00:55.423062 | orchestrator | Friday 19 September 2025 00:57:22 +0000 (0:00:02.794) 0:00:57.272 ******
2025-09-19 01:00:55.423073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 01:00:55.423084 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:00:55.423096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 01:00:55.423107 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:00:55.423118 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.423129 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:00:55.423148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 01:00:55.423166 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:00:55.423177 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.423189 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.423201 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:00:55.423211 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:00:55.423222 | orchestrator |
2025-09-19 01:00:55.423233 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2025-09-19 01:00:55.423244 | orchestrator | Friday 19 September 2025 00:57:25 +0000 (0:00:03.222) 0:01:00.494 ******
2025-09-19 01:00:55.423255 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:00:55.423265 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:00:55.423276 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:00:55.423287 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:00:55.423297 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:00:55.423308 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:00:55.423319 | orchestrator |
2025-09-19 01:00:55.423330 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2025-09-19 01:00:55.423340 | orchestrator | Friday 19 September 2025 00:57:29 +0000 (0:00:03.763) 0:01:04.257 ******
2025-09-19 01:00:55.423351 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:00:55.423361 | orchestrator |
2025-09-19 01:00:55.423372 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2025-09-19 01:00:55.423403 | orchestrator | Friday 19 September 2025 00:57:29 +0000 (0:00:00.123) 0:01:04.381 ******
2025-09-19 01:00:55.423415 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:00:55.423425 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:00:55.423436 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:00:55.423447 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:00:55.423457 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:00:55.423468 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:00:55.423478 | orchestrator |
2025-09-19 01:00:55.423489 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2025-09-19 01:00:55.423500 | orchestrator | Friday 19 September 2025 00:57:29 +0000 (0:00:00.558) 0:01:04.940 ******
2025-09-19 01:00:55.423511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 01:00:55.423529 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:00:55.423547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 01:00:55.423559 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:00:55.423570 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.423581 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:00:55.423593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 01:00:55.423604 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:00:55.423615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.423632 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:00:55.423649 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.423674 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:00:55.423707 | orchestrator |
2025-09-19 01:00:55.423719 | orchestrator | TASK [neutron : Copying over config.json files for services] *******************
2025-09-19 01:00:55.423730 | orchestrator | Friday 19 September 2025 00:57:32 +0000 (0:00:02.798) 0:01:07.738 ******
2025-09-19 01:00:55.423741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 01:00:55.423753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 01:00:55.423765 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.423777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 01:00:55.423803 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.423815 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.423826 | orchestrator |
2025-09-19 01:00:55.423837 | orchestrator | TASK [neutron : Copying over neutron.conf] *************************************
2025-09-19 01:00:55.423848 | orchestrator | Friday 19 September 2025 00:57:37 +0000 (0:00:04.769) 0:01:12.507 ******
2025-09-19 01:00:55.423860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 01:00:55.423871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 01:00:55.423889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 01:00:55.423907 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.423920 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.423931 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.423942 | orchestrator |
2025-09-19 01:00:55.423954 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2025-09-19 01:00:55.423965 | orchestrator | Friday 19 September 2025 00:57:43 +0000 (0:00:05.952) 0:01:18.460 ******
2025-09-19 01:00:55.423976 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.423999 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:00:55.424023 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.424034 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:00:55.424053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 01:00:55.424065 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.424077 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:00:55.424088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 01:00:55.424106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 01:00:55.424117 | orchestrator |
2025-09-19 01:00:55.424133 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2025-09-19 01:00:55.424145 | orchestrator | Friday 19 September 2025 00:57:45 +0000 (0:00:02.616) 0:01:21.076 ******
2025-09-19 01:00:55.424156 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:00:55.424166 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:00:55.424177 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:00:55.424188 | orchestrator | changed: [testbed-node-1]
2025-09-19 01:00:55.424198 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:00:55.424209 | orchestrator | changed: [testbed-node-2]
2025-09-19 01:00:55.424219 | orchestrator |
2025-09-19 01:00:55.424230 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2025-09-19 01:00:55.424241 | orchestrator | Friday 19 September 2025 00:57:48 +0000 (0:00:02.790) 0:01:23.866 ******
2025-09-19 01:00:55.424258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.424271 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:00:55.424282 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.424293 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:00:55.424304 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries':
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 01:00:55.424321 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:00:55.424333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 01:00:55.424355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 01:00:55.424367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 01:00:55.424379 | orchestrator | 2025-09-19 01:00:55.424407 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-09-19 01:00:55.424417 | orchestrator | Friday 19 September 2025 00:57:53 +0000 (0:00:04.511) 0:01:28.378 ****** 2025-09-19 01:00:55.424428 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:00:55.424439 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:00:55.424449 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:00:55.424460 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:00:55.424471 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:00:55.424481 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:00:55.424501 | orchestrator | 2025-09-19 01:00:55.424512 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-09-19 01:00:55.424523 | orchestrator | Friday 19 September 2025 00:57:54 +0000 
(0:00:01.846) 0:01:30.225 ****** 2025-09-19 01:00:55.424534 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:00:55.424544 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:00:55.424555 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:00:55.424566 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:00:55.424576 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:00:55.424587 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:00:55.424598 | orchestrator | 2025-09-19 01:00:55.424609 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-09-19 01:00:55.424619 | orchestrator | Friday 19 September 2025 00:57:57 +0000 (0:00:02.270) 0:01:32.495 ****** 2025-09-19 01:00:55.424630 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:00:55.424641 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:00:55.424652 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:00:55.424662 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:00:55.424673 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:00:55.424683 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:00:55.424694 | orchestrator | 2025-09-19 01:00:55.424704 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-09-19 01:00:55.424715 | orchestrator | Friday 19 September 2025 00:57:59 +0000 (0:00:01.868) 0:01:34.364 ****** 2025-09-19 01:00:55.424726 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:00:55.424737 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:00:55.424747 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:00:55.424757 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:00:55.424768 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:00:55.424779 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:00:55.424789 | orchestrator | 2025-09-19 01:00:55.424800 | orchestrator | TASK [neutron : Copying over 
eswitchd.conf] ************************************ 2025-09-19 01:00:55.424810 | orchestrator | Friday 19 September 2025 00:58:02 +0000 (0:00:03.103) 0:01:37.467 ****** 2025-09-19 01:00:55.424821 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:00:55.424832 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:00:55.424842 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:00:55.424853 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:00:55.424863 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:00:55.424874 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:00:55.424885 | orchestrator | 2025-09-19 01:00:55.424895 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-09-19 01:00:55.424906 | orchestrator | Friday 19 September 2025 00:58:03 +0000 (0:00:01.727) 0:01:39.195 ****** 2025-09-19 01:00:55.424917 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:00:55.424927 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:00:55.424938 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:00:55.424949 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:00:55.424959 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:00:55.424970 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:00:55.424980 | orchestrator | 2025-09-19 01:00:55.424991 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-09-19 01:00:55.425002 | orchestrator | Friday 19 September 2025 00:58:06 +0000 (0:00:02.331) 0:01:41.526 ****** 2025-09-19 01:00:55.425018 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-19 01:00:55.425030 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:00:55.425041 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-19 01:00:55.425065 | orchestrator | skipping: [testbed-node-1] 
2025-09-19 01:00:55.425076 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-19 01:00:55.425087 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:00:55.425103 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-19 01:00:55.425114 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:00:55.425126 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-19 01:00:55.425143 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:00:55.425154 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-19 01:00:55.425165 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:00:55.425175 | orchestrator | 2025-09-19 01:00:55.425186 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-09-19 01:00:55.425197 | orchestrator | Friday 19 September 2025 00:58:08 +0000 (0:00:02.705) 0:01:44.232 ****** 2025-09-19 01:00:55.425208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 01:00:55.425220 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:00:55.425231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 01:00:55.425242 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:00:55.425253 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 01:00:55.425264 | orchestrator | skipping: [testbed-node-3] 
2025-09-19 01:00:55.425281 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 01:00:55.425298 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:00:55.425653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 01:00:55.425675 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:00:55.425686 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 01:00:55.425698 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:00:55.425708 | orchestrator | 2025-09-19 01:00:55.425719 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-09-19 01:00:55.425730 | orchestrator | Friday 19 September 2025 00:58:10 +0000 (0:00:01.898) 0:01:46.130 ****** 2025-09-19 01:00:55.425741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 
01:00:55.425753 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:00:55.425764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 01:00:55.425789 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:00:55.425808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2025-09-19 01:00:55.425820 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:00:55.425831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 01:00:55.425842 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:00:55.425853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 01:00:55.425865 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:00:55.425876 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 01:00:55.425887 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:00:55.425908 | orchestrator | 2025-09-19 01:00:55.425919 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-09-19 01:00:55.425929 | orchestrator | Friday 19 September 2025 00:58:12 +0000 (0:00:01.947) 0:01:48.078 ****** 2025-09-19 01:00:55.425940 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:00:55.425951 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:00:55.425962 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:00:55.425972 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:00:55.425983 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:00:55.425993 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:00:55.426004 | orchestrator | 2025-09-19 01:00:55.426041 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-09-19 01:00:55.426055 | orchestrator | Friday 19 September 2025 00:58:15 +0000 (0:00:02.660) 0:01:50.738 ****** 2025-09-19 01:00:55.426066 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:00:55.426077 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:00:55.426087 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:00:55.426098 | orchestrator | changed: 
[testbed-node-4] 2025-09-19 01:00:55.426114 | orchestrator | changed: [testbed-node-3] 2025-09-19 01:00:55.426124 | orchestrator | changed: [testbed-node-5] 2025-09-19 01:00:55.426135 | orchestrator | 2025-09-19 01:00:55.426146 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-09-19 01:00:55.426157 | orchestrator | Friday 19 September 2025 00:58:19 +0000 (0:00:04.408) 0:01:55.146 ****** 2025-09-19 01:00:55.426168 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:00:55.426178 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:00:55.426189 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:00:55.426200 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:00:55.426210 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:00:55.426221 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:00:55.426232 | orchestrator | 2025-09-19 01:00:55.426243 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-09-19 01:00:55.426260 | orchestrator | Friday 19 September 2025 00:58:21 +0000 (0:00:02.038) 0:01:57.185 ****** 2025-09-19 01:00:55.426271 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:00:55.426284 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:00:55.426297 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:00:55.426309 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:00:55.426321 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:00:55.426334 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:00:55.426347 | orchestrator | 2025-09-19 01:00:55.426360 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-09-19 01:00:55.426372 | orchestrator | Friday 19 September 2025 00:58:23 +0000 (0:00:01.988) 0:01:59.174 ****** 2025-09-19 01:00:55.426438 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:00:55.426452 | orchestrator | skipping: 
[testbed-node-0] 2025-09-19 01:00:55.426464 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:00:55.426476 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:00:55.426488 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:00:55.426501 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:00:55.426514 | orchestrator | 2025-09-19 01:00:55.426526 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-09-19 01:00:55.426538 | orchestrator | Friday 19 September 2025 00:58:27 +0000 (0:00:03.174) 0:02:02.348 ****** 2025-09-19 01:00:55.426551 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:00:55.426562 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:00:55.426573 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:00:55.426583 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:00:55.426592 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:00:55.426602 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:00:55.426611 | orchestrator | 2025-09-19 01:00:55.426621 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-09-19 01:00:55.426630 | orchestrator | Friday 19 September 2025 00:58:29 +0000 (0:00:02.486) 0:02:04.835 ****** 2025-09-19 01:00:55.426652 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:00:55.426661 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:00:55.426671 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:00:55.426680 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:00:55.426689 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:00:55.426699 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:00:55.426708 | orchestrator | 2025-09-19 01:00:55.426718 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-09-19 01:00:55.426727 | orchestrator | Friday 19 September 2025 00:58:31 +0000 (0:00:01.909) 
0:02:06.745 ******
2025-09-19 01:00:55.426736 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:00:55.426746 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:00:55.426755 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:00:55.426765 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:00:55.426774 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:00:55.426783 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:00:55.426793 | orchestrator |
2025-09-19 01:00:55.426803 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-09-19 01:00:55.426812 | orchestrator | Friday 19 September 2025 00:58:33 +0000 (0:00:01.842) 0:02:08.587 ******
2025-09-19 01:00:55.426822 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:00:55.426831 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:00:55.426841 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:00:55.426850 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:00:55.426859 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:00:55.426869 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:00:55.426878 | orchestrator |
2025-09-19 01:00:55.426888 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-09-19 01:00:55.426897 | orchestrator | Friday 19 September 2025 00:58:35 +0000 (0:00:02.128) 0:02:10.716 ******
2025-09-19 01:00:55.426907 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:00:55.426916 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:00:55.426925 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:00:55.426935 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:00:55.426944 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:00:55.426954 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:00:55.426963 | orchestrator |
2025-09-19 01:00:55.426973 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-09-19 01:00:55.426982 | orchestrator | Friday 19 September 2025 00:58:38 +0000 (0:00:02.731) 0:02:13.447 ******
2025-09-19 01:00:55.426992 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-09-19 01:00:55.427002 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:00:55.427012 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-09-19 01:00:55.427021 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-09-19 01:00:55.427031 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:00:55.427040 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:00:55.427050 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-09-19 01:00:55.427060 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:00:55.427069 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-09-19 01:00:55.427079 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:00:55.427093 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-09-19 01:00:55.427103 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:00:55.427117 | orchestrator |
2025-09-19 01:00:55.427126 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2025-09-19 01:00:55.427136 | orchestrator | Friday 19 September 2025 00:58:40 +0000 (0:00:01.973) 0:02:15.421 ******
2025-09-19 01:00:55.427159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 01:00:55.427170 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:00:55.427180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 01:00:55.427191 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:00:55.427200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 01:00:55.427210 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:00:55.427220 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.427230 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:00:55.427277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.427294 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:00:55.427304 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.427314 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:00:55.427324 | orchestrator |
2025-09-19 01:00:55.427333 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2025-09-19 01:00:55.427343 | orchestrator | Friday 19 September 2025 00:58:42 +0000 (0:00:01.888) 0:02:17.309 ******
2025-09-19 01:00:55.427353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 01:00:55.427363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 01:00:55.427378 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.427414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 01:00:55.427425 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.427435 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 01:00:55.427445 | orchestrator |
2025-09-19 01:00:55.427455 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-19 01:00:55.427464 | orchestrator | Friday 19 September 2025 00:58:44 +0000 (0:00:02.527) 0:02:19.836 ******
2025-09-19 01:00:55.427474 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:00:55.427484 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:00:55.427493 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:00:55.427503 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:00:55.427512 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:00:55.427522 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:00:55.427531 | orchestrator |
2025-09-19 01:00:55.427541 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2025-09-19 01:00:55.427551 | orchestrator | Friday 19 September 2025 00:58:45 +0000 (0:00:00.687) 0:02:20.524 ******
2025-09-19 01:00:55.427560 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:00:55.427570 | orchestrator |
2025-09-19 01:00:55.427579 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2025-09-19 01:00:55.427589 | orchestrator | Friday 19 September 2025 00:58:47 +0000 (0:00:02.288) 0:02:22.812 ******
2025-09-19 01:00:55.427604 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:00:55.427614 | orchestrator |
2025-09-19 01:00:55.427623 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-09-19 01:00:55.427633 | orchestrator | Friday 19 September 2025 00:58:49 +0000 (0:00:02.201) 0:02:25.013 ******
2025-09-19 01:00:55.427642 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:00:55.427652 | orchestrator |
2025-09-19 01:00:55.427661 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-19 01:00:55.427671 | orchestrator | Friday 19 September 2025 00:59:31 +0000 (0:00:41.729) 0:03:06.743 ******
2025-09-19 01:00:55.427681 | orchestrator |
2025-09-19 01:00:55.427690 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-19 01:00:55.427700 | orchestrator | Friday 19 September 2025 00:59:31 +0000 (0:00:00.067) 0:03:06.810 ******
2025-09-19 01:00:55.427709 | orchestrator |
2025-09-19 01:00:55.427719 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-19 01:00:55.427728 | orchestrator | Friday 19 September 2025 00:59:31 +0000 (0:00:00.079) 0:03:06.890 ******
2025-09-19 01:00:55.427738 | orchestrator |
2025-09-19 01:00:55.427747 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-19 01:00:55.427761 | orchestrator | Friday 19 September 2025 00:59:31 +0000 (0:00:00.069) 0:03:06.959 ******
2025-09-19 01:00:55.427771 | orchestrator |
2025-09-19 01:00:55.427781 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-19 01:00:55.427790 | orchestrator | Friday 19 September 2025 00:59:31 +0000 (0:00:00.234) 0:03:07.194 ******
2025-09-19 01:00:55.427800 | orchestrator |
2025-09-19 01:00:55.427809 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-19 01:00:55.427819 | orchestrator | Friday 19 September 2025 00:59:32 +0000 (0:00:00.064) 0:03:07.258 ******
2025-09-19 01:00:55.427828 | orchestrator |
2025-09-19 01:00:55.427838 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-09-19 01:00:55.427848 | orchestrator | Friday 19 September 2025 00:59:32 +0000 (0:00:00.069) 0:03:07.327 ******
2025-09-19 01:00:55.427857 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:00:55.427872 | orchestrator | changed: [testbed-node-2]
2025-09-19 01:00:55.427882 | orchestrator | changed: [testbed-node-1]
2025-09-19 01:00:55.427891 | orchestrator |
2025-09-19 01:00:55.427901 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-09-19 01:00:55.427911 | orchestrator | Friday 19 September 2025 00:59:58 +0000 (0:00:26.680) 0:03:34.008 ******
2025-09-19 01:00:55.427920 | orchestrator | changed: [testbed-node-3]
2025-09-19 01:00:55.427930 | orchestrator | changed: [testbed-node-4]
2025-09-19 01:00:55.427939 | orchestrator | changed: [testbed-node-5]
2025-09-19 01:00:55.427949 | orchestrator |
2025-09-19 01:00:55.427958 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 01:00:55.427968 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-19 01:00:55.427979 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-09-19 01:00:55.427989 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-09-19 01:00:55.427998 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-09-19 01:00:55.428008 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-09-19 01:00:55.428018 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-09-19 01:00:55.428027 | orchestrator |
2025-09-19 01:00:55.428043 | orchestrator |
2025-09-19 01:00:55.428052 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 01:00:55.428062 | orchestrator | Friday 19 September 2025 01:00:53 +0000 (0:00:54.276) 0:04:28.285 ******
2025-09-19 01:00:55.428072 | orchestrator | ===============================================================================
2025-09-19 01:00:55.428081 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 54.28s
2025-09-19 01:00:55.428091 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 41.73s
2025-09-19 01:00:55.428100 | orchestrator | neutron : Restart neutron-server container ----------------------------- 26.68s
2025-09-19 01:00:55.428110 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.62s
2025-09-19 01:00:55.428119 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.72s
2025-09-19 01:00:55.428129 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.95s
2025-09-19 01:00:55.428138 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.77s
2025-09-19 01:00:55.428147 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.51s
2025-09-19 01:00:55.428157 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.41s
2025-09-19 01:00:55.428166 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.08s
2025-09-19 01:00:55.428176 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.01s
2025-09-19 01:00:55.428185 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.89s
2025-09-19 01:00:55.428195 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 3.76s
2025-09-19 01:00:55.428204 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.70s
2025-09-19 01:00:55.428214 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.42s
2025-09-19 01:00:55.428223 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.25s
2025-09-19 01:00:55.428233 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.22s
2025-09-19 01:00:55.428242 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 3.18s
2025-09-19 01:00:55.428251 | orchestrator | neutron : Copying over mlnx_agent.ini ----------------------------------- 3.10s
2025-09-19 01:00:55.428261 | orchestrator | neutron : Copying over existing policy file ----------------------------- 2.80s
2025-09-19 01:00:55.428270 | orchestrator | 2025-09-19 01:00:55 | INFO  | Task b6dfb3f9-adff-4bea-91dd-0c94007a7c9e is in state SUCCESS
2025-09-19 01:00:55.428280 | orchestrator | 2025-09-19 01:00:55 | INFO  | Task 8c703420-900d-405c-a046-e72964c09359 is in state STARTED
2025-09-19 01:00:55.428294 | orchestrator | 2025-09-19 01:00:55 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED
2025-09-19 01:00:55.428304 | orchestrator | 2025-09-19 01:00:55 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED
2025-09-19 01:00:55.428313 | orchestrator | 2025-09-19 01:00:55 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED
2025-09-19 01:00:55.428323 | orchestrator | 2025-09-19 01:00:55 | INFO  | Wait 1 second(s) until the next check
2025-09-19 01:00:58.463256 | orchestrator | 2025-09-19 01:00:58 | INFO  | Task 8c703420-900d-405c-a046-e72964c09359 is in state STARTED
2025-09-19 01:00:58.463342 | orchestrator | 2025-09-19 01:00:58 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED
2025-09-19 01:00:58.465562 | orchestrator | 2025-09-19 01:00:58 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED
2025-09-19 01:00:58.466243 | orchestrator | 2025-09-19 01:00:58 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED
2025-09-19 01:00:58.466279 | orchestrator | 2025-09-19 01:00:58 | INFO  | Wait 1 second(s) until the next check
2025-09-19 01:01:01.500300 | orchestrator | 2025-09-19 01:01:01 | INFO  | Task 8c703420-900d-405c-a046-e72964c09359 is in state STARTED
2025-09-19 01:01:01.501885 | orchestrator | 2025-09-19 01:01:01 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED
2025-09-19 01:01:01.502226 | orchestrator | 2025-09-19 01:01:01 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED
2025-09-19 01:01:01.502776 | orchestrator | 2025-09-19 01:01:01 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED
2025-09-19 01:01:01.502931 | orchestrator | 2025-09-19 01:01:01 | INFO  | Wait 1 second(s) until the next check
2025-09-19 01:01:04.535350 | orchestrator | 2025-09-19 01:01:04 | INFO  | Task 8c703420-900d-405c-a046-e72964c09359 is in state STARTED
2025-09-19 01:01:04.535483 | orchestrator | 2025-09-19 01:01:04 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED
2025-09-19 01:01:04.535604 | orchestrator | 2025-09-19 01:01:04 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED
2025-09-19 01:01:04.536364 | orchestrator | 2025-09-19 01:01:04 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED
2025-09-19 01:01:04.536412 | orchestrator | 2025-09-19 01:01:04 | INFO  | Wait 1 second(s) until the next check
2025-09-19 01:01:07.579629 | orchestrator | 2025-09-19 01:01:07 | INFO  | Task 8c703420-900d-405c-a046-e72964c09359 is in state STARTED
2025-09-19 01:01:07.581039 | orchestrator | 2025-09-19 01:01:07 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED
2025-09-19 01:01:07.584640 | orchestrator | 2025-09-19 01:01:07 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED
2025-09-19 01:01:07.586123 | orchestrator | 2025-09-19 01:01:07 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED
2025-09-19 01:01:07.586168 | orchestrator | 2025-09-19 01:01:07 | INFO  | Wait 1 second(s) until the next check
2025-09-19 01:01:10.612990 | orchestrator | 2025-09-19 01:01:10 | INFO  | Task 8c703420-900d-405c-a046-e72964c09359 is in state STARTED
2025-09-19 01:01:10.613069 | orchestrator | 2025-09-19 01:01:10 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED
2025-09-19 01:01:10.613343 | orchestrator | 2025-09-19 01:01:10 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED
2025-09-19 01:01:10.613631 | orchestrator | 2025-09-19 01:01:10 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED
2025-09-19 01:01:10.613655 | orchestrator | 2025-09-19 01:01:10 | INFO  | Wait 1 second(s) until the next check
2025-09-19 01:01:13.652189 | orchestrator | 2025-09-19 01:01:13 | INFO  | Task 8c703420-900d-405c-a046-e72964c09359 is in state STARTED
2025-09-19 01:01:13.654290 | orchestrator | 2025-09-19 01:01:13 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED
2025-09-19 01:01:13.657021 | orchestrator | 2025-09-19 01:01:13 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED
2025-09-19 01:01:13.659436 | orchestrator | 2025-09-19 01:01:13 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED
2025-09-19 01:01:13.659575 | orchestrator | 2025-09-19 01:01:13 | INFO  | Wait 1 second(s) until the next check
2025-09-19 01:01:16.703598 | orchestrator | 2025-09-19 01:01:16 | INFO  | Task 8c703420-900d-405c-a046-e72964c09359 is in state STARTED
2025-09-19 01:01:16.705355 | orchestrator | 2025-09-19 01:01:16 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state STARTED
2025-09-19 01:01:16.707489 | orchestrator | 2025-09-19 01:01:16 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED
2025-09-19 01:01:16.708981 | orchestrator | 2025-09-19 01:01:16 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED
2025-09-19 01:01:16.709035 | orchestrator | 2025-09-19 01:01:16 | INFO  | Wait 1 second(s) until the next check
2025-09-19 01:01:19.754822 | orchestrator | 2025-09-19 01:01:19 | INFO  | Task 8c703420-900d-405c-a046-e72964c09359 is in state STARTED
2025-09-19 01:01:19.756235 | orchestrator | 2025-09-19 01:01:19 | INFO  | Task 7717513a-3a16-410c-8836-9ecfee5a3954 is in state SUCCESS
2025-09-19 01:01:19.758359 | orchestrator |
2025-09-19 01:01:19.758526 | orchestrator |
2025-09-19 01:01:19.758542 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 01:01:19.758554 | orchestrator |
2025-09-19 01:01:19.758565 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 01:01:19.758577 | orchestrator | Friday 19 September 2025 00:59:24 +0000 (0:00:00.232) 0:00:00.232 ******
2025-09-19 01:01:19.758588 | orchestrator | ok: [testbed-node-0]
2025-09-19 01:01:19.758601 | orchestrator | ok: [testbed-node-1]
2025-09-19 01:01:19.758611 | orchestrator | ok: [testbed-node-2]
2025-09-19 01:01:19.758622 | orchestrator |
2025-09-19 01:01:19.758633 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 01:01:19.758644 | orchestrator | Friday 19 September 2025 00:59:24 +0000 (0:00:00.271) 0:00:00.503 ******
2025-09-19 01:01:19.758655 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-09-19 01:01:19.758667 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-09-19 01:01:19.758677 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-09-19 01:01:19.758688 | orchestrator |
2025-09-19 01:01:19.758699 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-09-19 01:01:19.758710 | orchestrator |
2025-09-19 01:01:19.758721 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-09-19 01:01:19.758732 | orchestrator | Friday 19 September 2025 00:59:24 +0000 (0:00:00.336) 0:00:00.839 ******
2025-09-19 01:01:19.758743 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 01:01:19.758755 | orchestrator |
2025-09-19 01:01:19.758766 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-09-19 01:01:19.758777 | orchestrator | Friday 19 September 2025 00:59:25 +0000 (0:00:00.454) 0:00:01.294 ******
2025-09-19 01:01:19.758788 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-09-19 01:01:19.758799 | orchestrator |
2025-09-19 01:01:19.758811 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-09-19 01:01:19.758822 | orchestrator | Friday 19 September 2025 00:59:29 +0000 (0:00:03.720) 0:00:05.014 ******
2025-09-19 01:01:19.758832 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-09-19 01:01:19.758845 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-09-19 01:01:19.758857 | orchestrator |
2025-09-19 01:01:19.758868 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-09-19 01:01:19.758879 | orchestrator | Friday 19 September 2025 00:59:35 +0000 (0:00:06.468) 0:00:11.482 ******
2025-09-19 01:01:19.758890 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 01:01:19.758901 | orchestrator |
2025-09-19 01:01:19.758912 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-09-19 01:01:19.758923 | orchestrator | Friday 19 September 2025 00:59:38 +0000 (0:00:03.220) 0:00:14.702 ******
2025-09-19 01:01:19.758934 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 01:01:19.758945 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-09-19 01:01:19.758956 | orchestrator |
2025-09-19 01:01:19.758967 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-09-19 01:01:19.758978 | orchestrator | Friday 19 September 2025 00:59:42 +0000 (0:00:04.110) 0:00:18.813 ******
2025-09-19 01:01:19.759016 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 01:01:19.759029 | orchestrator |
2025-09-19 01:01:19.759042 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-09-19 01:01:19.759054 | orchestrator | Friday 19 September 2025 00:59:46 +0000 (0:00:03.388) 0:00:22.202 ******
2025-09-19 01:01:19.759067 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-09-19 01:01:19.759080 | orchestrator |
2025-09-19 01:01:19.759093 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-09-19 01:01:19.759106 | orchestrator | Friday 19 September 2025 00:59:50 +0000 (0:00:03.792) 0:00:25.994 ******
2025-09-19 01:01:19.759119 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:01:19.759132 | orchestrator |
2025-09-19 01:01:19.759146 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-09-19 01:01:19.759158 | orchestrator | Friday 19 September 2025 00:59:53 +0000 (0:00:03.586) 0:00:29.580 ******
2025-09-19 01:01:19.759172 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:01:19.759184 | orchestrator |
2025-09-19 01:01:19.759197 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-09-19 01:01:19.759211 | orchestrator | Friday 19 September 2025 00:59:57 +0000 (0:00:04.317) 0:00:33.898 ******
2025-09-19 01:01:19.759224 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:01:19.759235 | orchestrator |
2025-09-19 01:01:19.759246 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2025-09-19 01:01:19.759258 | orchestrator | Friday 19 September 2025 01:00:01 +0000 (0:00:03.823) 0:00:37.722 ******
2025-09-19 01:01:19.759308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 01:01:19.759326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 01:01:19.759338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 01:01:19.759360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 01:01:19.759402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 01:01:19.759426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 01:01:19.759438 | orchestrator |
2025-09-19 01:01:19.759449 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2025-09-19 01:01:19.759460 | orchestrator | Friday 19 September 2025 01:00:03 +0000 (0:00:01.758) 0:00:39.480 ******
2025-09-19 01:01:19.759472 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:01:19.759483 | orchestrator |
2025-09-19 01:01:19.759494 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2025-09-19 01:01:19.759505 | orchestrator | Friday 19 September 2025 01:00:03 +0000 (0:00:00.219) 0:00:39.699 ******
2025-09-19 01:01:19.759516 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:01:19.759527 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:01:19.759538 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:01:19.759549 | orchestrator |
2025-09-19 01:01:19.759560 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2025-09-19 01:01:19.759571 | orchestrator | Friday 19 September 2025 01:00:04 +0000 (0:00:00.822) 0:00:40.522 ******
2025-09-19 01:01:19.759582 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-19 01:01:19.759593 | orchestrator |
2025-09-19 01:01:19.759604 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2025-09-19 01:01:19.759615 | orchestrator | Friday 19 September 2025 01:00:05 +0000 (0:00:01.181) 0:00:41.703 ******
2025-09-19 01:01:19.759627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes':
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 01:01:19.759647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 01:01:19.759665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 01:01:19.759685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:01:19.759698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:01:19.759718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:01:19.759729 | orchestrator | 2025-09-19 01:01:19.759740 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-09-19 01:01:19.759751 | orchestrator | Friday 19 September 2025 01:00:09 +0000 (0:00:03.243) 0:00:44.947 ****** 2025-09-19 01:01:19.759762 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:01:19.759773 | orchestrator | ok: [testbed-node-1] 2025-09-19 01:01:19.759784 | orchestrator | ok: [testbed-node-2] 2025-09-19 01:01:19.759795 | orchestrator | 2025-09-19 01:01:19.759806 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-19 01:01:19.759817 | orchestrator | Friday 19 September 2025 01:00:09 +0000 (0:00:00.475) 0:00:45.422 ****** 2025-09-19 01:01:19.759829 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 01:01:19.759840 | orchestrator | 2025-09-19 01:01:19.759851 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-09-19 01:01:19.759862 
| orchestrator | Friday 19 September 2025 01:00:10 +0000 (0:00:00.871) 0:00:46.294 ****** 2025-09-19 01:01:19.759879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 01:01:19.759899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 
2025-09-19 01:01:19.759912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 01:01:19.759932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:01:19.759945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:01:19.759962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:01:19.759974 | orchestrator | 2025-09-19 01:01:19.759985 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-09-19 01:01:19.759996 | orchestrator | Friday 19 September 2025 01:00:13 +0000 (0:00:02.707) 0:00:49.002 ****** 2025-09-19 01:01:19.760016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 01:01:19.760036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 01:01:19.760049 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:01:19.760061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 01:01:19.760074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 01:01:19.760085 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:01:19.760102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 01:01:19.760122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 01:01:19.760141 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:01:19.760153 | orchestrator | 2025-09-19 01:01:19.760164 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-09-19 01:01:19.760175 | orchestrator | Friday 19 September 2025 01:00:13 +0000 (0:00:00.617) 0:00:49.619 ****** 2025-09-19 01:01:19.760186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 01:01:19.760199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 01:01:19.760211 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:01:19.760222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 01:01:19.760247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 01:01:19.760275 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:01:19.760287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 
01:01:19.760299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 01:01:19.760311 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:01:19.760322 | orchestrator | 2025-09-19 01:01:19.760333 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-09-19 01:01:19.760344 | orchestrator | Friday 19 September 2025 01:00:14 +0000 (0:00:01.025) 0:00:50.645 ****** 2025-09-19 01:01:19.760355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 01:01:19.760395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 01:01:19.760438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 
01:01:19.760451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:01:19.760463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:01:19.760474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:01:19.760485 | orchestrator | 2025-09-19 01:01:19.760496 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-09-19 01:01:19.760507 | orchestrator | Friday 19 September 2025 01:00:18 +0000 (0:00:03.505) 0:00:54.150 ****** 2025-09-19 01:01:19.760524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 01:01:19.760552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 01:01:19.760564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 01:01:19.760576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:01:19.760587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:01:19.760604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:01:19.760624 | orchestrator | 2025-09-19 01:01:19.760635 | orchestrator | TASK [magnum : Copying over existing 
policy file] ****************************** 2025-09-19 01:01:19.760652 | orchestrator | Friday 19 September 2025 01:00:27 +0000 (0:00:09.017) 0:01:03.168 ****** 2025-09-19 01:01:19.760664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 01:01:19.760676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 01:01:19.760687 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:01:19.760699 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 01:01:19.760710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 01:01:19.760730 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:01:19.760755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 01:01:19.760767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 01:01:19.760778 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:01:19.760789 | orchestrator | 2025-09-19 01:01:19.760800 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-09-19 01:01:19.760812 | orchestrator | Friday 19 September 2025 01:00:28 +0000 (0:00:01.389) 0:01:04.557 ****** 2025-09-19 01:01:19.760823 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 01:01:19.760835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 01:01:19.760858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 01:01:19.760876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:01:19.760888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:01:19.760900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:01:19.760911 | orchestrator | 2025-09-19 01:01:19.760922 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-19 01:01:19.760934 | orchestrator | Friday 19 September 2025 01:00:30 +0000 (0:00:01.969) 0:01:06.526 ****** 2025-09-19 01:01:19.760945 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:01:19.760956 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:01:19.760967 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:01:19.760977 | orchestrator | 2025-09-19 01:01:19.760995 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-09-19 01:01:19.761006 | orchestrator | Friday 19 September 2025 01:00:30 +0000 (0:00:00.210) 0:01:06.737 ****** 2025-09-19 01:01:19.761016 | orchestrator | changed: 
[testbed-node-0]
2025-09-19 01:01:19.761027 | orchestrator |
2025-09-19 01:01:19.761038 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2025-09-19 01:01:19.761048 | orchestrator | Friday 19 September 2025 01:00:32 +0000 (0:00:02.082) 0:01:08.819 ******
2025-09-19 01:01:19.761059 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:01:19.761069 | orchestrator |
2025-09-19 01:01:19.761081 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2025-09-19 01:01:19.761092 | orchestrator | Friday 19 September 2025 01:00:35 +0000 (0:00:02.117) 0:01:10.937 ******
2025-09-19 01:01:19.761102 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:01:19.761113 | orchestrator |
2025-09-19 01:01:19.761124 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-19 01:01:19.761135 | orchestrator | Friday 19 September 2025 01:00:50 +0000 (0:00:15.092) 0:01:26.029 ******
2025-09-19 01:01:19.761146 | orchestrator |
2025-09-19 01:01:19.761156 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-19 01:01:19.761167 | orchestrator | Friday 19 September 2025 01:00:50 +0000 (0:00:00.060) 0:01:26.089 ******
2025-09-19 01:01:19.761178 | orchestrator |
2025-09-19 01:01:19.761189 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-19 01:01:19.761205 | orchestrator | Friday 19 September 2025 01:00:50 +0000 (0:00:00.058) 0:01:26.148 ******
2025-09-19 01:01:19.761217 | orchestrator |
2025-09-19 01:01:19.761227 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2025-09-19 01:01:19.761238 | orchestrator | Friday 19 September 2025 01:00:50 +0000 (0:00:00.059) 0:01:26.208 ******
2025-09-19 01:01:19.761249 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:01:19.761260 | orchestrator | changed: [testbed-node-1]
2025-09-19 01:01:19.761271 | orchestrator | changed: [testbed-node-2]
2025-09-19 01:01:19.761281 | orchestrator |
2025-09-19 01:01:19.761292 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-09-19 01:01:19.761303 | orchestrator | Friday 19 September 2025 01:01:07 +0000 (0:00:17.679) 0:01:43.887 ******
2025-09-19 01:01:19.761314 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:01:19.761325 | orchestrator | changed: [testbed-node-2]
2025-09-19 01:01:19.761335 | orchestrator | changed: [testbed-node-1]
2025-09-19 01:01:19.761346 | orchestrator |
2025-09-19 01:01:19.761389 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 01:01:19.761404 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 01:01:19.761416 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-19 01:01:19.761427 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-19 01:01:19.761438 | orchestrator |
2025-09-19 01:01:19.761449 | orchestrator |
2025-09-19 01:01:19.761460 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 01:01:19.761470 | orchestrator | Friday 19 September 2025 01:01:18 +0000 (0:00:10.418) 0:01:54.306 ******
2025-09-19 01:01:19.761481 | orchestrator | ===============================================================================
2025-09-19 01:01:19.761492 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 17.68s
2025-09-19 01:01:19.761503 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.09s
2025-09-19 01:01:19.761514 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.42s
2025-09-19 01:01:19.761525 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 9.02s
2025-09-19 01:01:19.761543 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.47s
2025-09-19 01:01:19.761554 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.32s
2025-09-19 01:01:19.761565 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.11s
2025-09-19 01:01:19.761576 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.82s
2025-09-19 01:01:19.761587 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.79s
2025-09-19 01:01:19.761598 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.72s
2025-09-19 01:01:19.761609 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.59s
2025-09-19 01:01:19.761620 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.51s
2025-09-19 01:01:19.761631 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.39s
2025-09-19 01:01:19.761642 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.24s
2025-09-19 01:01:19.761652 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.22s
2025-09-19 01:01:19.761663 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.71s
2025-09-19 01:01:19.761674 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.12s
2025-09-19 01:01:19.761685 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.08s
2025-09-19 01:01:19.761695 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.97s
2025-09-19 01:01:19.761706
| orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.76s 2025-09-19 01:01:19.761717 | orchestrator | 2025-09-19 01:01:19 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:01:19.761728 | orchestrator | 2025-09-19 01:01:19 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:01:19.761739 | orchestrator | 2025-09-19 01:01:19 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:01:22.793803 | orchestrator | 2025-09-19 01:01:22 | INFO  | Task 8c703420-900d-405c-a046-e72964c09359 is in state STARTED 2025-09-19 01:01:22.793970 | orchestrator | 2025-09-19 01:01:22 | INFO  | Task 7b14374c-170f-4333-9bd7-41ce280edda4 is in state STARTED 2025-09-19 01:01:22.793996 | orchestrator | 2025-09-19 01:01:22 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:01:22.795201 | orchestrator | 2025-09-19 01:01:22 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:01:22.795229 | orchestrator | 2025-09-19 01:01:22 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:01:25.832444 | orchestrator | 2025-09-19 01:01:25 | INFO  | Task 8c703420-900d-405c-a046-e72964c09359 is in state STARTED 2025-09-19 01:01:25.832856 | orchestrator | 2025-09-19 01:01:25 | INFO  | Task 7b14374c-170f-4333-9bd7-41ce280edda4 is in state STARTED 2025-09-19 01:01:25.834853 | orchestrator | 2025-09-19 01:01:25 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:01:25.838616 | orchestrator | 2025-09-19 01:01:25 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:01:25.838707 | orchestrator | 2025-09-19 01:01:25 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:01:28.875534 | orchestrator | 2025-09-19 01:01:28 | INFO  | Task 8c703420-900d-405c-a046-e72964c09359 is in state STARTED 2025-09-19 01:01:28.876009 | orchestrator | 2025-09-19 01:01:28 | INFO 
 | Task 7b14374c-170f-4333-9bd7-41ce280edda4 is in state STARTED 2025-09-19 01:01:28.877264 | orchestrator | 2025-09-19 01:01:28 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:01:28.878233 | orchestrator | 2025-09-19 01:01:28 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:01:28.878991 | orchestrator | 2025-09-19 01:01:28 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:01:31.928542 | orchestrator | 2025-09-19 01:01:31 | INFO  | Task 8c703420-900d-405c-a046-e72964c09359 is in state SUCCESS 2025-09-19 01:01:31.928649 | orchestrator | 2025-09-19 01:01:31 | INFO  | Task 7b14374c-170f-4333-9bd7-41ce280edda4 is in state STARTED 2025-09-19 01:01:31.928672 | orchestrator | 2025-09-19 01:01:31 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:01:31.928690 | orchestrator | 2025-09-19 01:01:31 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:01:31.928708 | orchestrator | 2025-09-19 01:01:31 | INFO  | Task 2df3c146-4008-4035-a4f1-271f77c7d1e4 is in state STARTED 2025-09-19 01:01:31.928726 | orchestrator | 2025-09-19 01:01:31 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:01:34.959489 | orchestrator | 2025-09-19 01:01:34 | INFO  | Task 7b14374c-170f-4333-9bd7-41ce280edda4 is in state STARTED 2025-09-19 01:01:34.961422 | orchestrator | 2025-09-19 01:01:34 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:01:34.961991 | orchestrator | 2025-09-19 01:01:34 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:01:34.963649 | orchestrator | 2025-09-19 01:01:34 | INFO  | Task 2df3c146-4008-4035-a4f1-271f77c7d1e4 is in state STARTED 2025-09-19 01:01:34.963681 | orchestrator | 2025-09-19 01:01:34 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:01:38.000754 | orchestrator | 2025-09-19 01:01:38 | INFO  | Task 
7b14374c-170f-4333-9bd7-41ce280edda4 is in state STARTED 2025-09-19 01:01:38.003071 | orchestrator | 2025-09-19 01:01:38 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:01:38.003099 | orchestrator | 2025-09-19 01:01:38 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:01:38.003345 | orchestrator | 2025-09-19 01:01:38 | INFO  | Task 2df3c146-4008-4035-a4f1-271f77c7d1e4 is in state STARTED 2025-09-19 01:01:38.003396 | orchestrator | 2025-09-19 01:01:38 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:01:41.046669 | orchestrator | 2025-09-19 01:01:41 | INFO  | Task 7b14374c-170f-4333-9bd7-41ce280edda4 is in state STARTED 2025-09-19 01:01:41.049395 | orchestrator | 2025-09-19 01:01:41 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:01:41.051890 | orchestrator | 2025-09-19 01:01:41 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:01:41.054447 | orchestrator | 2025-09-19 01:01:41 | INFO  | Task 2df3c146-4008-4035-a4f1-271f77c7d1e4 is in state STARTED 2025-09-19 01:01:41.054493 | orchestrator | 2025-09-19 01:01:41 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:01:44.092585 | orchestrator | 2025-09-19 01:01:44 | INFO  | Task 7b14374c-170f-4333-9bd7-41ce280edda4 is in state STARTED 2025-09-19 01:01:44.092964 | orchestrator | 2025-09-19 01:01:44 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:01:44.094196 | orchestrator | 2025-09-19 01:01:44 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:01:44.095125 | orchestrator | 2025-09-19 01:01:44 | INFO  | Task 2df3c146-4008-4035-a4f1-271f77c7d1e4 is in state STARTED 2025-09-19 01:01:44.095152 | orchestrator | 2025-09-19 01:01:44 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:01:47.141693 | orchestrator | 2025-09-19 01:01:47 | INFO  | Task 
7b14374c-170f-4333-9bd7-41ce280edda4 is in state STARTED 2025-09-19 01:01:47.143146 | orchestrator | 2025-09-19 01:01:47 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:01:47.144702 | orchestrator | 2025-09-19 01:01:47 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:01:47.147773 | orchestrator | 2025-09-19 01:01:47 | INFO  | Task 2df3c146-4008-4035-a4f1-271f77c7d1e4 is in state STARTED 2025-09-19 01:01:47.147859 | orchestrator | 2025-09-19 01:01:47 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:01:50.184236 | orchestrator | 2025-09-19 01:01:50 | INFO  | Task 7b14374c-170f-4333-9bd7-41ce280edda4 is in state STARTED 2025-09-19 01:01:50.185331 | orchestrator | 2025-09-19 01:01:50 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:01:50.186896 | orchestrator | 2025-09-19 01:01:50 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:01:50.188634 | orchestrator | 2025-09-19 01:01:50 | INFO  | Task 2df3c146-4008-4035-a4f1-271f77c7d1e4 is in state STARTED 2025-09-19 01:01:50.188677 | orchestrator | 2025-09-19 01:01:50 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:01:53.224826 | orchestrator | 2025-09-19 01:01:53 | INFO  | Task 7b14374c-170f-4333-9bd7-41ce280edda4 is in state STARTED 2025-09-19 01:01:53.228485 | orchestrator | 2025-09-19 01:01:53 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:01:53.228520 | orchestrator | 2025-09-19 01:01:53 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:01:53.228532 | orchestrator | 2025-09-19 01:01:53 | INFO  | Task 2df3c146-4008-4035-a4f1-271f77c7d1e4 is in state STARTED 2025-09-19 01:01:53.228544 | orchestrator | 2025-09-19 01:01:53 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:01:56.283403 | orchestrator | 2025-09-19 01:01:56 | INFO  | Task 
7b14374c-170f-4333-9bd7-41ce280edda4 is in state STARTED 2025-09-19 01:01:56.286244 | orchestrator | 2025-09-19 01:01:56 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state STARTED 2025-09-19 01:01:56.288111 | orchestrator | 2025-09-19 01:01:56 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:01:56.289752 | orchestrator | 2025-09-19 01:01:56 | INFO  | Task 2df3c146-4008-4035-a4f1-271f77c7d1e4 is in state STARTED 2025-09-19 01:01:56.290182 | orchestrator | 2025-09-19 01:01:56 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:02:35.786799 | orchestrator | 2025-09-19 01:02:35 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:02:35.787559 | orchestrator | 2025-09-19 01:02:35 | INFO  | Task 7b14374c-170f-4333-9bd7-41ce280edda4 is in state STARTED 2025-09-19 01:02:35.789856 | orchestrator | 2025-09-19 01:02:35 | INFO  | Task 6b8b2566-bcd3-42aa-9052-fb875c0df2ac is in state SUCCESS 2025-09-19 01:02:35.791233 | orchestrator | 2025-09-19 01:02:35.791270 | orchestrator | 2025-09-19 01:02:35.791282 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 01:02:35.791411 | orchestrator | 2025-09-19 01:02:35.791434 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 01:02:35.791457 | orchestrator | Friday 19 September 2025 01:00:57 +0000 (0:00:00.238) 0:00:00.238 ****** 2025-09-19 01:02:35.791503 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:02:35.791516 | orchestrator | ok: [testbed-node-1] 2025-09-19 01:02:35.791526 | orchestrator | ok: [testbed-node-2] 2025-09-19 01:02:35.791537 | orchestrator | ok: [testbed-manager] 2025-09-19 01:02:35.791548 | orchestrator | ok: [testbed-node-3] 2025-09-19 01:02:35.791558 | orchestrator | ok: [testbed-node-4] 2025-09-19 01:02:35.791569 | orchestrator | ok: [testbed-node-5] 2025-09-19 01:02:35.791585 | orchestrator | 2025-09-19 
01:02:35.791604 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 01:02:35.791624 | orchestrator | Friday 19 September 2025 01:00:58 +0000 (0:00:00.679) 0:00:00.918 ****** 2025-09-19 01:02:35.791644 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-09-19 01:02:35.791866 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-09-19 01:02:35.791893 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-09-19 01:02:35.791917 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-09-19 01:02:35.791939 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-09-19 01:02:35.791965 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-09-19 01:02:35.791988 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-09-19 01:02:35.792001 | orchestrator | 2025-09-19 01:02:35.792014 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-19 01:02:35.792027 | orchestrator | 2025-09-19 01:02:35.792040 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-09-19 01:02:35.792052 | orchestrator | Friday 19 September 2025 01:00:59 +0000 (0:00:01.042) 0:00:01.960 ****** 2025-09-19 01:02:35.792066 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 01:02:35.792084 | orchestrator | 2025-09-19 01:02:35.792103 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-09-19 01:02:35.792123 | orchestrator | Friday 19 September 2025 01:01:00 +0000 (0:00:01.375) 0:00:03.336 ****** 2025-09-19 01:02:35.792142 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-09-19 01:02:35.792162 | orchestrator 
| 2025-09-19 01:02:35.792200 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-09-19 01:02:35.792223 | orchestrator | Friday 19 September 2025 01:01:04 +0000 (0:00:03.505) 0:00:06.842 ****** 2025-09-19 01:02:35.792244 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-09-19 01:02:35.792266 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-09-19 01:02:35.792285 | orchestrator | 2025-09-19 01:02:35.792330 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-09-19 01:02:35.792343 | orchestrator | Friday 19 September 2025 01:01:10 +0000 (0:00:06.695) 0:00:13.537 ****** 2025-09-19 01:02:35.792356 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-19 01:02:35.792376 | orchestrator | 2025-09-19 01:02:35.792397 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-09-19 01:02:35.792418 | orchestrator | Friday 19 September 2025 01:01:13 +0000 (0:00:03.171) 0:00:16.708 ****** 2025-09-19 01:02:35.792439 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 01:02:35.792461 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-09-19 01:02:35.792527 | orchestrator | 2025-09-19 01:02:35.792542 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-09-19 01:02:35.792553 | orchestrator | Friday 19 September 2025 01:01:17 +0000 (0:00:03.875) 0:00:20.584 ****** 2025-09-19 01:02:35.792590 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 01:02:35.792602 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-09-19 01:02:35.792612 | orchestrator | 2025-09-19 01:02:35.792802 | orchestrator | TASK 
[service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-09-19 01:02:35.792844 | orchestrator | Friday 19 September 2025 01:01:24 +0000 (0:00:06.497) 0:00:27.082 ****** 2025-09-19 01:02:35.792855 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-09-19 01:02:35.792866 | orchestrator | 2025-09-19 01:02:35.792876 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 01:02:35.792887 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 01:02:35.792899 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 01:02:35.792910 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 01:02:35.792920 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 01:02:35.792931 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 01:02:35.792962 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 01:02:35.792983 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 01:02:35.793003 | orchestrator | 2025-09-19 01:02:35.793023 | orchestrator | 2025-09-19 01:02:35.793041 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 01:02:35.793062 | orchestrator | Friday 19 September 2025 01:01:29 +0000 (0:00:04.700) 0:00:31.783 ****** 2025-09-19 01:02:35.793080 | orchestrator | =============================================================================== 2025-09-19 01:02:35.793101 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.70s 2025-09-19 01:02:35.793120 | 
orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.50s 2025-09-19 01:02:35.793138 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.70s 2025-09-19 01:02:35.793149 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.88s 2025-09-19 01:02:35.793160 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.51s 2025-09-19 01:02:35.793170 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.17s 2025-09-19 01:02:35.793181 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.38s 2025-09-19 01:02:35.793191 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.04s 2025-09-19 01:02:35.793202 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.68s 2025-09-19 01:02:35.793212 | orchestrator | 2025-09-19 01:02:35.793224 | orchestrator | 2025-09-19 01:02:35.793235 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 01:02:35.793245 | orchestrator | 2025-09-19 01:02:35.793256 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 01:02:35.793267 | orchestrator | Friday 19 September 2025 00:59:49 +0000 (0:00:00.277) 0:00:00.277 ****** 2025-09-19 01:02:35.793297 | orchestrator | ok: [testbed-manager] 2025-09-19 01:02:35.793333 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:02:35.793343 | orchestrator | ok: [testbed-node-1] 2025-09-19 01:02:35.793354 | orchestrator | ok: [testbed-node-2] 2025-09-19 01:02:35.793365 | orchestrator | ok: [testbed-node-3] 2025-09-19 01:02:35.793375 | orchestrator | ok: [testbed-node-4] 2025-09-19 01:02:35.793386 | orchestrator | ok: [testbed-node-5] 2025-09-19 01:02:35.793408 | orchestrator | 2025-09-19 01:02:35.793419 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 01:02:35.793430 | orchestrator | Friday 19 September 2025 00:59:50 +0000 (0:00:00.821) 0:00:01.098 ****** 2025-09-19 01:02:35.793451 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-09-19 01:02:35.793462 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-09-19 01:02:35.793473 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-09-19 01:02:35.793484 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-09-19 01:02:35.793503 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-09-19 01:02:35.793522 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-09-19 01:02:35.793543 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-09-19 01:02:35.793562 | orchestrator | 2025-09-19 01:02:35.793581 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-09-19 01:02:35.793600 | orchestrator | 2025-09-19 01:02:35.793620 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-19 01:02:35.793641 | orchestrator | Friday 19 September 2025 00:59:51 +0000 (0:00:00.680) 0:00:01.779 ****** 2025-09-19 01:02:35.793661 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 01:02:35.793682 | orchestrator | 2025-09-19 01:02:35.793694 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-09-19 01:02:35.793705 | orchestrator | Friday 19 September 2025 00:59:52 +0000 (0:00:01.436) 0:00:03.215 ****** 2025-09-19 01:02:35.793732 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-19 01:02:35.793773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 01:02:35.793796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 01:02:35.793839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 01:02:35.793861 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 01:02:35.793971 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 01:02:35.793985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-19 01:02:35.794003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.794097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.794135 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 01:02:35.794158 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.794182 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.794216 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 01:02:35.794237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.794258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.794289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.794394 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.794428 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-19 01:02:35.794443 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.794464 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.794475 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.794493 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.794505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.794516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.794535 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.794552 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.794591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.794613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.794631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.794648 | orchestrator | 2025-09-19 01:02:35.794666 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-19 01:02:35.794685 | orchestrator | Friday 19 September 2025 00:59:55 +0000 (0:00:02.829) 0:00:06.045 ****** 2025-09-19 01:02:35.794708 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 01:02:35.794719 | orchestrator | 2025-09-19 01:02:35.794728 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-09-19 01:02:35.794738 | orchestrator | Friday 19 September 2025 00:59:56 +0000 (0:00:01.296) 0:00:07.342 ****** 2025-09-19 01:02:35.794749 | 
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-19 01:02:35.794773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 01:02:35.794804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 01:02:35.794855 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 01:02:35.794877 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 01:02:35.794892 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 01:02:35.794929 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 01:02:35.794941 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 01:02:35.794951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.794968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.794986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.794997 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.795007 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.795017 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.795031 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.795042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.795058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.795075 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.795085 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.795095 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.795106 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-19 01:02:35.795129 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.795140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.795170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.795189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.795208 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.795225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.795243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.795262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.795280 | orchestrator | 2025-09-19 01:02:35.795328 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-09-19 01:02:35.795346 | orchestrator | Friday 19 September 2025 01:00:02 +0000 (0:00:05.965) 0:00:13.308 ****** 2025-09-19 01:02:35.795358 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-19 01:02:35.795395 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 01:02:35.795406 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 01:02:35.795417 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-19 01:02:35.795427 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 01:02:35.795442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 01:02:35.795453 | orchestrator | skipping: [testbed-manager] 2025-09-19 01:02:35.795464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 01:02:35.795487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 01:02:35.795516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 01:02:35.795536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 01:02:35.795554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 01:02:35.795598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 01:02:35.795614 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:02:35.795624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 01:02:35.795640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 01:02:35.795658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 01:02:35.795668 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:02:35.795685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 01:02:35.795695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 01:02:35.795705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 01:02:35.795716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 01:02:35.795726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 01:02:35.795735 | orchestrator | skipping: [testbed-node-2] 
2025-09-19 01:02:35.795746 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 01:02:35.795779 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 01:02:35.795799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 01:02:35.795817 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:02:35.795843 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 01:02:35.795863 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 01:02:35.795882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 01:02:35.795901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 01:02:35.795912 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 01:02:35.795935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 01:02:35.795945 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:02:35.795955 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:02:35.795965 | orchestrator | 2025-09-19 01:02:35.795975 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-09-19 01:02:35.795985 | orchestrator | Friday 19 September 2025 01:00:05 +0000 (0:00:02.208) 0:00:15.516 ****** 2025-09-19 01:02:35.795995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 01:02:35.796012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 01:02:35.796023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 01:02:35.796033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 01:02:35.796047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 01:02:35.796065 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-19 01:02:35.796098 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 01:02:35.796117 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 01:02:35.796147 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-19 01:02:35.796168 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 01:02:35.796186 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:02:35.796201 | orchestrator | skipping: [testbed-manager] 2025-09-19 01:02:35.796212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 01:02:35.796222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 01:02:35.796239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 01:02:35.796253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 01:02:35.796263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 01:02:35.796273 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:02:35.796289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 01:02:35.796318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 
'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 01:02:35.796328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 01:02:35.796338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 01:02:35.796354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 01:02:35.796369 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 01:02:35.796393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 01:02:35.796411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 01:02:35.796439 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 01:02:35.796457 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:02:35.796475 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 01:02:35.796495 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 01:02:35.796514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 01:02:35.796543 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:02:35.796557 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 01:02:35.796568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 01:02:35.796583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 01:02:35.796593 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:02:35.796602 | orchestrator | 2025-09-19 01:02:35.796612 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-09-19 01:02:35.796624 | orchestrator | Friday 19 September 2025 01:00:07 +0000 (0:00:02.715) 0:00:18.232 ****** 2025-09-19 01:02:35.796651 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-19 01:02:35.796670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 01:02:35.796688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 01:02:35.796716 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 01:02:35.796735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 01:02:35.796755 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 01:02:35.796779 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2025-09-19 01:02:35.796793 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 01:02:35.796810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.796821 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.796837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.796848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.796858 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.796875 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.796885 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.796900 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.796910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.796920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.796935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.796946 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-19 01:02:35.796971 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.796989 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.797015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.797034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.797065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 01:02:35.797084 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.797099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.797109 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.797124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 01:02:35.797134 | orchestrator | 2025-09-19 01:02:35.797143 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-09-19 01:02:35.797153 | orchestrator | Friday 19 September 2025 01:00:13 +0000 (0:00:06.074) 0:00:24.307 ****** 2025-09-19 01:02:35.797163 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 01:02:35.797173 | orchestrator | 2025-09-19 01:02:35.797190 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-09-19 01:02:35.797206 | orchestrator | Friday 19 September 2025 01:00:14 +0000 (0:00:00.968) 0:00:25.275 ****** 2025-09-19 01:02:35.797234 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
996, 'inode': 1074188, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.120998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797263 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1074188, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.120998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797282 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1074188, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.120998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797357 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1074188, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.120998, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797371 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1074361, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1383095, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797387 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1074188, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.120998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797397 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1074361, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1383095, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797414 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1074188, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.120998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 01:02:35.797432 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1074188, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.120998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797442 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1074361, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1383095, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797458 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1074109, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1201956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797476 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1074109, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1201956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797500 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1074361, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1383095, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797521 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1074361, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1383095, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797549 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1074361, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1383095, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797580 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1074109, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1201956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797599 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1074109, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 
1758240887.1201956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797617 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1074109, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1201956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797631 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1074208, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1234372, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797646 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1074208, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1234372, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797656 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1074208, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1234372, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797691 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1074361, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1383095, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 01:02:35.797711 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1074109, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1201956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797730 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1074208, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1234372, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797749 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1074208, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1234372, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797767 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1074105, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.105751, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797792 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1074105, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.105751, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797810 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1074105, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.105751, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797929 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1074105, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.105751, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797940 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1074208, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 
1758240887.1234372, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797948 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1074190, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1212735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797956 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1074105, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.105751, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797965 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1074190, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1212735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.797977 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1074109, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1201956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 01:02:35.797986 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1074190, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1212735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.798049 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1074190, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1212735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.798061 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1074205, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1230512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.798069 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1074205, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1230512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.798077 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1074194, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1217134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.798085 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1074105, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.105751, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.798098 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1074205, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1230512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.798106 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1074190, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1212735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.798162 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1074205, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1230512, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798180 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1074184, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1201956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798195 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1074208, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1234372, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798209 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1074194, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1217134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798225 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1074194, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1217134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798247 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1074360, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1377594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798269 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1074190, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1212735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798278 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1074194, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1217134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798331 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1074184, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1201956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798342 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1074205, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1230512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798350 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1074184, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1201956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798358 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1074205, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1230512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798373 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1074360, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1377594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798396 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1074100, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1045444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798411 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1074100, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1045444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798460 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1074184, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1201956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798476 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1074105, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.105751, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798487 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1074194, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1217134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798502 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1074360, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1377594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798517 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1074385, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1407287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798546 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1074385, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1407287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798555 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1074194, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1217134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798588 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1074100, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1045444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798598 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1074360, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1377594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798606 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1074213, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1377594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798614 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1074184, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1201956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798627 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1074100, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1045444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798639 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1074213, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1377594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798647 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1074184, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1201956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798678 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1074385, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1407287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798687 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1074106, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.105986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798695 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1074360, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1377594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798703 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1074106, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.105986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798716 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1074385, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1407287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798728 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1074360, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1377594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798736 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1074100, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1045444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798749 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1074190, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1212735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798757 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1074213, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1377594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798765 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1074100, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1045444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798773 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1074104, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.10529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798789 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1074213, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1377594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798801 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1074104, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.10529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798813 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1074385, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1407287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798836 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1074106, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.105986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798853 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1074385, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1407287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798869 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1074106, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.105986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798883 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1074201, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1222231, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798911 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1074201, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1222231, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798934 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1074213, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1377594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798950 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1074104, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.10529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798964 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1074213, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1377594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798973 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1074205, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1230512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798981 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1074199, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1222231, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.798995 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1074201, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1222231, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.799006 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1074106, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.105986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.799025 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1074104, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.10529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.799041 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1074199, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1222231, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.799063 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1074104, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.10529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.799079 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1074380, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1397057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.799095 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:02:35.799110 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1074106, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.105986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.799131 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1074201, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1222231, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.799140 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1074199, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1222231, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.799153 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1074199, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1222231, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.799161 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1074201, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1222231, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.799174 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1074104, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.10529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.799183 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1074380, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1397057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.799191 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:02:35.799199 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1074380, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1397057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.799212 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1074199, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1222231, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.799220 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:02:35.799228 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1074194, 'dev': 114,
'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1217134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 01:02:35.799240 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1074380, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1397057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.799248 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:02:35.799256 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1074201, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1222231, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.799268 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1074380, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1397057, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.799276 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:02:35.799285 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1074199, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1222231, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.799327 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1074380, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1397057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 01:02:35.799343 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:02:35.799357 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1074184, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1201956, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 01:02:35.799372 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1074360, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1377594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 01:02:35.799393 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1074100, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1045444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 01:02:35.799410 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1074385, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1407287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}) 2025-09-19 01:02:35.799432 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1074213, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1377594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 01:02:35.799445 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1074106, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.105986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 01:02:35.799468 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1074104, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.10529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 01:02:35.799476 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1074201, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1222231, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.799484 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1074199, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1222231, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.799498 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1074380, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1397057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 01:02:35.799513 | orchestrator |
2025-09-19 01:02:35.799527 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-09-19 01:02:35.799542 | orchestrator | Friday 19 September 2025 01:00:39 +0000
(0:00:24.295) 0:00:49.570 ******
2025-09-19 01:02:35.799556 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 01:02:35.799572 | orchestrator |
2025-09-19 01:02:35.799586 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-09-19 01:02:35.799601 | orchestrator | Friday 19 September 2025 01:00:39 +0000 (0:00:00.756) 0:00:50.327 ******
2025-09-19 01:02:35.799616 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2025-09-19 01:02:35.799667 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 01:02:35.799682 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2025-09-19 01:02:35.799722 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-19 01:02:35.799729 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2025-09-19 01:02:35.799768 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2025-09-19 01:02:35.799807 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2025-09-19 01:02:35.799874 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2025-09-19 01:02:35.799939 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2025-09-19 01:02:35.799980 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-19 01:02:35.799994 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-19 01:02:35.800008 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-19 01:02:35.800023 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-19 01:02:35.800037 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-19 01:02:35.800052 | orchestrator |
2025-09-19 01:02:35.800066 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-09-19 01:02:35.800081 | orchestrator | Friday 19 September 2025 01:00:41 +0000 (0:00:01.863) 0:00:52.190 ******
2025-09-19 01:02:35.800095 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 01:02:35.800106 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:02:35.800114 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 01:02:35.800121 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:02:35.800129 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 01:02:35.800137 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:02:35.800156 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 01:02:35.800164 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 01:02:35.800172 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:02:35.800179 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:02:35.800187 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 01:02:35.800195 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:02:35.800203 | orchestrator | changed: [testbed-manager] =>
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-09-19 01:02:35.800210 | orchestrator | 2025-09-19 01:02:35.800218 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-09-19 01:02:35.800226 | orchestrator | Friday 19 September 2025 01:00:55 +0000 (0:00:13.809) 0:01:06.000 ****** 2025-09-19 01:02:35.800234 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-19 01:02:35.800242 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:02:35.800250 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-19 01:02:35.800257 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:02:35.800265 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-19 01:02:35.800273 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:02:35.800281 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-19 01:02:35.800289 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:02:35.800347 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-19 01:02:35.800359 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:02:35.800367 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-19 01:02:35.800375 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:02:35.800383 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-09-19 01:02:35.800392 | orchestrator | 2025-09-19 01:02:35.800406 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-09-19 01:02:35.800421 | orchestrator | Friday 19 September 2025 01:00:58 
+0000 (0:00:02.635) 0:01:08.635 ******
2025-09-19 01:02:35.800436 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-19 01:02:35.800452 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:02:35.800466 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-19 01:02:35.800481 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:02:35.800497 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-19 01:02:35.800512 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:02:35.800526 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-19 01:02:35.800542 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:02:35.800557 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-19 01:02:35.800569 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:02:35.800578 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-19 01:02:35.800592 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-19 01:02:35.800621 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:02:35.800635 | orchestrator |
2025-09-19 01:02:35.800650 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-09-19 01:02:35.800664 | orchestrator | Friday 19 September 2025 01:00:59 +0000 (0:00:01.538) 0:01:10.174 ******
2025-09-19 01:02:35.800679 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 01:02:35.800691 | orchestrator |
2025-09-19 01:02:35.800699 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-09-19 01:02:35.800707 | orchestrator | Friday 19 September 2025 01:01:00 +0000 (0:00:00.693) 0:01:10.868 ******
2025-09-19 01:02:35.800715 | orchestrator | skipping: [testbed-manager]
2025-09-19 01:02:35.800722 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:02:35.800729 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:02:35.800735 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:02:35.800742 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:02:35.800748 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:02:35.800755 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:02:35.800761 | orchestrator |
2025-09-19 01:02:35.800768 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-09-19 01:02:35.800774 | orchestrator | Friday 19 September 2025 01:01:01 +0000 (0:00:00.612) 0:01:11.481 ******
2025-09-19 01:02:35.800781 | orchestrator | skipping: [testbed-manager]
2025-09-19 01:02:35.800788 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:02:35.800794 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:02:35.800801 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:02:35.800807 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:02:35.800814 | orchestrator | changed: [testbed-node-1]
2025-09-19 01:02:35.800820 | orchestrator | changed: [testbed-node-2]
2025-09-19 01:02:35.800827 | orchestrator |
2025-09-19 01:02:35.800838 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-09-19 01:02:35.800844 | orchestrator | Friday 19 September 2025 01:01:03 +0000 (0:00:02.043) 0:01:13.524 ******
2025-09-19 01:02:35.800851 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-19 01:02:35.800858 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-19 01:02:35.800864 | orchestrator | skipping: [testbed-manager]
2025-09-19 01:02:35.800871 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-19 01:02:35.800878 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-19 01:02:35.800884 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:02:35.800891 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:02:35.800898 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:02:35.800904 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-19 01:02:35.800911 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:02:35.800917 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-19 01:02:35.800924 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:02:35.800930 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-19 01:02:35.800937 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:02:35.800949 | orchestrator |
2025-09-19 01:02:35.800961 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-09-19 01:02:35.800979 | orchestrator | Friday 19 September 2025 01:01:04 +0000 (0:00:01.235) 0:01:14.759 ******
2025-09-19 01:02:35.800992 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-19 01:02:35.801003 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-19 01:02:35.801015 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:02:35.801035 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:02:35.801047 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-19 01:02:35.801057 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:02:35.801064 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-19 01:02:35.801071 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:02:35.801077 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-19 01:02:35.801084 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-19 01:02:35.801091 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:02:35.801103 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-19 01:02:35.801115 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:02:35.801127 | orchestrator |
2025-09-19 01:02:35.801140 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-09-19 01:02:35.801153 | orchestrator | Friday 19 September 2025 01:01:05 +0000 (0:00:01.219) 0:01:15.979 ******
2025-09-19 01:02:35.801165 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is not a directory
2025-09-19 01:02:35.801216 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 01:02:35.801223 | orchestrator |
2025-09-19 01:02:35.801229 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-09-19 01:02:35.801236 | orchestrator | Friday 19 September 2025 01:01:06 +0000 (0:00:00.987) 0:01:16.967 ******
2025-09-19 01:02:35.801242 | orchestrator | skipping: [testbed-manager]
2025-09-19 01:02:35.801249 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:02:35.801255 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:02:35.801262 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:02:35.801268 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:02:35.801275 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:02:35.801282 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:02:35.801288 | orchestrator |
2025-09-19 01:02:35.801295 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2025-09-19 01:02:35.801317 | orchestrator | Friday 19 September 2025 01:01:07 +0000 (0:00:00.730) 0:01:17.697 ******
2025-09-19 01:02:35.801324 | orchestrator | skipping: [testbed-manager]
2025-09-19 01:02:35.801331 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:02:35.801337 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:02:35.801344 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:02:35.801351 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:02:35.801357 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:02:35.801364 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:02:35.801370 | orchestrator |
2025-09-19 01:02:35.801377 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2025-09-19 01:02:35.801384 | orchestrator | Friday 19 September 2025 01:01:08 +0000 (0:00:00.731) 0:01:18.428 ******
2025-09-19 01:02:35.801398 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True,
'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 01:02:35.801417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 01:02:35.801438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 01:02:35.801451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 01:02:35.801463 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 01:02:35.801476 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 01:02:35.801490 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 01:02:35.801507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 01:02:35.801523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 01:02:35.801531 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 01:02:35.801543 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 01:02:35.801550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 01:02:35.801557 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 01:02:35.801565 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 01:02:35.801609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 01:02:35.801632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 01:02:35.801651 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 01:02:35.801665 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 01:02:35.801677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 01:02:35.801686 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 01:02:35.801693 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 01:02:35.801700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 01:02:35.801716 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 01:02:35.801723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 01:02:35.801735 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 01:02:35.801742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 01:02:35.801749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 01:02:35.801756 | orchestrator
| changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 01:02:35.801763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 01:02:35.801775 | orchestrator |
2025-09-19 01:02:35.801782 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2025-09-19 01:02:35.801788 | orchestrator | Friday 19 September 2025 01:01:12 +0000 (0:00:04.743) 0:01:23.172 ******
2025-09-19 01:02:35.801795 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-09-19 01:02:35.801801 | orchestrator | skipping: [testbed-manager]
2025-09-19 01:02:35.801808 | orchestrator |
2025-09-19 01:02:35.801815 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 01:02:35.801821 | orchestrator | Friday 19 September 2025 01:01:13 +0000 (0:00:01.093) 0:01:24.265 ******
2025-09-19 01:02:35.801828 | orchestrator |
2025-09-19 01:02:35.801837 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 01:02:35.801844 | orchestrator | Friday 19 September 2025 01:01:13 +0000 (0:00:00.064) 0:01:24.329 ******
2025-09-19 01:02:35.801850 | orchestrator |
2025-09-19 01:02:35.801857 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 01:02:35.801864 | orchestrator | Friday 19 September 2025 01:01:13 +0000 (0:00:00.058) 0:01:24.388 ******
2025-09-19 01:02:35.801870 | orchestrator |
2025-09-19 01:02:35.801881 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 01:02:35.801892 | orchestrator | Friday 19 September 2025 01:01:14 +0000 (0:00:00.237) 0:01:24.625 ******
2025-09-19 01:02:35.801904 | orchestrator |
2025-09-19 01:02:35.801917 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 01:02:35.801928 | orchestrator | Friday 19 September 2025 01:01:14 +0000 (0:00:00.065) 0:01:24.691 ******
2025-09-19 01:02:35.801940 | orchestrator |
2025-09-19 01:02:35.801952 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 01:02:35.801965 | orchestrator | Friday 19 September 2025 01:01:14 +0000 (0:00:00.063) 0:01:24.754 ******
2025-09-19 01:02:35.801976 | orchestrator |
2025-09-19 01:02:35.801986 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 01:02:35.801993 | orchestrator | Friday 19 September 2025 01:01:14 +0000 (0:00:00.062) 0:01:24.816 ******
2025-09-19 01:02:35.801999 | orchestrator |
2025-09-19 01:02:35.802006 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-09-19 01:02:35.802051 | orchestrator | Friday 19 September 2025 01:01:14 +0000 (0:00:00.093) 0:01:24.910 ******
2025-09-19 01:02:35.802068 | orchestrator | changed: [testbed-manager]
2025-09-19 01:02:35.802081 | orchestrator |
2025-09-19 01:02:35.802094 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-09-19 01:02:35.802114 | orchestrator | Friday 19 September 2025 01:01:33 +0000 (0:00:18.551) 0:01:43.462 ******
2025-09-19 01:02:35.802127 | orchestrator | changed: [testbed-node-1]
2025-09-19 01:02:35.802136 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:02:35.802142 | orchestrator | changed: [testbed-node-3]
2025-09-19 01:02:35.802149 | orchestrator | changed: [testbed-node-2]
2025-09-19 01:02:35.802156 | orchestrator | changed: [testbed-node-4]
2025-09-19 01:02:35.802162 | orchestrator | changed: [testbed-manager]
2025-09-19 01:02:35.802169 | orchestrator | changed: [testbed-node-5]
2025-09-19 01:02:35.802175 | orchestrator |
2025-09-19 01:02:35.802182 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-09-19 01:02:35.802188 | orchestrator | Friday 19 September 2025 01:01:47 +0000 (0:00:14.152) 0:01:57.614 ******
2025-09-19 01:02:35.802195 | orchestrator | changed: [testbed-node-1]
2025-09-19 01:02:35.802201 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:02:35.802208 | orchestrator | changed: [testbed-node-2]
2025-09-19 01:02:35.802214 | orchestrator |
2025-09-19 01:02:35.802221 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-09-19 01:02:35.802227 | orchestrator | Friday 19 September 2025 01:01:51 +0000 (0:00:04.739) 0:02:02.354 ******
2025-09-19 01:02:35.802234 | orchestrator | changed: [testbed-node-2]
2025-09-19 01:02:35.802240 | orchestrator | changed: [testbed-node-1]
2025-09-19 01:02:35.802247 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:02:35.802260 | orchestrator |
2025-09-19 01:02:35.802267 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-09-19 01:02:35.802273 | orchestrator | Friday 19 September 2025 01:01:57 +0000 (0:00:05.405) 0:02:07.759 ******
2025-09-19 01:02:35.802280 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:02:35.802286 | orchestrator | changed: [testbed-node-1]
2025-09-19 01:02:35.802293 | orchestrator | changed: [testbed-node-2]
2025-09-19 01:02:35.802317 | orchestrator | changed: [testbed-node-4]
2025-09-19 01:02:35.802324 | orchestrator | changed: [testbed-node-5]
2025-09-19 01:02:35.802331 | orchestrator | changed: [testbed-node-3]
2025-09-19 01:02:35.802341 | orchestrator | changed: [testbed-manager]
2025-09-19 01:02:35.802352 | orchestrator |
2025-09-19 01:02:35.802364 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-09-19 01:02:35.802375 | orchestrator | Friday 19 September 2025 01:02:08 +0000 (0:00:11.532) 0:02:19.292 ******
2025-09-19 01:02:35.802387 | orchestrator | changed: [testbed-manager]
2025-09-19 01:02:35.802399 | orchestrator |
2025-09-19 01:02:35.802411 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-09-19 01:02:35.802423 | orchestrator | Friday 19 September 2025 01:02:16 +0000 (0:00:07.729) 0:02:27.021 ******
2025-09-19 01:02:35.802436 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:02:35.802448 | orchestrator | changed: [testbed-node-2]
2025-09-19 01:02:35.802458 | orchestrator | changed: [testbed-node-1]
2025-09-19 01:02:35.802465 | orchestrator |
2025-09-19 01:02:35.802471 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-09-19 01:02:35.802478 | orchestrator | Friday 19 September 2025 01:02:22 +0000 (0:00:05.627) 0:02:32.648 ******
2025-09-19 01:02:35.802485 | orchestrator | changed: [testbed-manager]
2025-09-19 01:02:35.802491 | orchestrator |
2025-09-19 01:02:35.802498 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-09-19 01:02:35.802504 | orchestrator | Friday 19 September 2025 01:02:27 +0000 (0:00:04.804) 0:02:37.453 ******
2025-09-19 01:02:35.802511 | orchestrator | changed: [testbed-node-5]
2025-09-19 01:02:35.802517 | orchestrator | changed: [testbed-node-4]
2025-09-19 01:02:35.802524 | orchestrator | changed: [testbed-node-3]
2025-09-19 01:02:35.802531 | orchestrator |
2025-09-19 01:02:35.802537 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 01:02:35.802544 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-19 01:02:35.802551 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-19 01:02:35.802558 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-19 01:02:35.802569 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-19 01:02:35.802576 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-19 01:02:35.802583 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-19 01:02:35.802594 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-19 01:02:35.802606 | orchestrator |
2025-09-19 01:02:35.802618 | orchestrator |
2025-09-19 01:02:35.802630 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 01:02:35.802642 | orchestrator | Friday 19 September 2025 01:02:33 +0000 (0:00:06.444) 0:02:43.898 ******
2025-09-19 01:02:35.802654 | orchestrator | ===============================================================================
2025-09-19 01:02:35.802679 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 24.30s
2025-09-19 01:02:35.802692 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 18.55s
2025-09-19 01:02:35.802704 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.15s
2025-09-19 01:02:35.802712 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 13.81s
2025-09-19 01:02:35.802724 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 11.53s
2025-09-19 01:02:35.802731 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.73s
2025-09-19 01:02:35.802738 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 6.44s
2025-09-19 01:02:35.802744 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.07s
2025-09-19 01:02:35.802751 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.97s
2025-09-19 01:02:35.802757 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.63s
2025-09-19 01:02:35.802764 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.41s
2025-09-19 01:02:35.802770 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.80s
2025-09-19 01:02:35.802777 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.74s
2025-09-19 01:02:35.802783 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 4.74s
2025-09-19 01:02:35.802790 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.83s
2025-09-19 01:02:35.802796 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.72s
2025-09-19 01:02:35.802803 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.64s
2025-09-19 01:02:35.802809 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.21s
2025-09-19 01:02:35.802816 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.04s 2025-09-19 01:02:35.802823 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.86s 2025-09-19 01:02:35.802829 | orchestrator | 2025-09-19 01:02:35 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state STARTED 2025-09-19 01:02:35.802836 | orchestrator | 2025-09-19 01:02:35 | INFO  | Task 2df3c146-4008-4035-a4f1-271f77c7d1e4 is in state STARTED 2025-09-19 01:02:35.802843 | orchestrator | 2025-09-19 01:02:35 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:02:38.817685 | orchestrator | 2025-09-19 01:02:38 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:02:38.817903 | orchestrator | 2025-09-19 01:02:38 | INFO  | Task aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state STARTED 2025-09-19 01:02:38.817952 | orchestrator | 2025-09-19 01:02:38 | INFO  | Task 7b14374c-170f-4333-9bd7-41ce280edda4 is in state STARTED 2025-09-19 01:02:38.818686 | orchestrator | 2025-09-19 01:02:38 | INFO  | Task 38e5d12f-6e15-4d77-a907-33b0af8c691f is in state SUCCESS 2025-09-19 01:02:38.819265 | orchestrator | 2025-09-19 01:02:38 | INFO  | Task 2df3c146-4008-4035-a4f1-271f77c7d1e4 is in state STARTED 2025-09-19 01:02:38.819329 | orchestrator | 2025-09-19 01:02:38 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:02:41.847418 | orchestrator | 2025-09-19 01:02:41 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:02:41.847649 | orchestrator | 2025-09-19 01:02:41 | INFO  | Task aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state STARTED 2025-09-19 01:02:41.848140 | orchestrator | 2025-09-19 01:02:41 | INFO  | Task 7b14374c-170f-4333-9bd7-41ce280edda4 is in state STARTED 2025-09-19 01:02:41.848191 | orchestrator | 2025-09-19 01:02:41 | INFO  | Task 2df3c146-4008-4035-a4f1-271f77c7d1e4 is in state STARTED 2025-09-19 01:02:41.848229 | 
orchestrator | 2025-09-19 01:02:41 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:02:44.874579 | orchestrator | 2025-09-19 01:02:44 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:02:44.876130 | orchestrator | 2025-09-19 01:02:44 | INFO  | Task aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state STARTED 2025-09-19 01:02:44.876627 | orchestrator | 2025-09-19 01:02:44 | INFO  | Task 7b14374c-170f-4333-9bd7-41ce280edda4 is in state STARTED 2025-09-19 01:02:44.880256 | orchestrator | 2025-09-19 01:02:44 | INFO  | Task 2df3c146-4008-4035-a4f1-271f77c7d1e4 is in state STARTED 2025-09-19 01:02:44.880340 | orchestrator | 2025-09-19 01:02:44 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:02:47.906051 | orchestrator | 2025-09-19 01:02:47 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:02:47.906345 | orchestrator | 2025-09-19 01:02:47 | INFO  | Task aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state STARTED 2025-09-19 01:02:47.907052 | orchestrator | 2025-09-19 01:02:47 | INFO  | Task 7b14374c-170f-4333-9bd7-41ce280edda4 is in state STARTED 2025-09-19 01:02:47.907511 | orchestrator | 2025-09-19 01:02:47 | INFO  | Task 2df3c146-4008-4035-a4f1-271f77c7d1e4 is in state STARTED 2025-09-19 01:02:47.907561 | orchestrator | 2025-09-19 01:02:47 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:02:50.931945 | orchestrator | 2025-09-19 01:02:50 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:02:50.932029 | orchestrator | 2025-09-19 01:02:50 | INFO  | Task aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state STARTED 2025-09-19 01:02:50.932209 | orchestrator | 2025-09-19 01:02:50 | INFO  | Task 7b14374c-170f-4333-9bd7-41ce280edda4 is in state STARTED 2025-09-19 01:02:50.932808 | orchestrator | 2025-09-19 01:02:50 | INFO  | Task 2df3c146-4008-4035-a4f1-271f77c7d1e4 is in state STARTED 2025-09-19 01:02:50.932840 | orchestrator | 2025-09-19 
01:02:50 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:02:53.958506 | orchestrator | 2025-09-19 01:02:53 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:02:53.958589 | orchestrator | 2025-09-19 01:02:53 | INFO  | Task aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state STARTED 2025-09-19 01:02:53.959663 | orchestrator | 2025-09-19 01:02:53 | INFO  | Task 7b14374c-170f-4333-9bd7-41ce280edda4 is in state STARTED 2025-09-19 01:02:53.959894 | orchestrator | 2025-09-19 01:02:53 | INFO  | Task 2df3c146-4008-4035-a4f1-271f77c7d1e4 is in state STARTED 2025-09-19 01:02:53.960106 | orchestrator | 2025-09-19 01:02:53 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:02:57.003744 | orchestrator | 2025-09-19 01:02:57 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:02:57.004150 | orchestrator | 2025-09-19 01:02:57 | INFO  | Task aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state STARTED 2025-09-19 01:02:57.005111 | orchestrator | 2025-09-19 01:02:57 | INFO  | Task 7b14374c-170f-4333-9bd7-41ce280edda4 is in state STARTED 2025-09-19 01:02:57.006142 | orchestrator | 2025-09-19 01:02:57 | INFO  | Task 2df3c146-4008-4035-a4f1-271f77c7d1e4 is in state STARTED 2025-09-19 01:02:57.006174 | orchestrator | 2025-09-19 01:02:57 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:03:00.049206 | orchestrator | 2025-09-19 01:03:00 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:03:00.052560 | orchestrator | 2025-09-19 01:03:00 | INFO  | Task aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state STARTED 2025-09-19 01:03:00.054013 | orchestrator | 2025-09-19 01:03:00 | INFO  | Task 7b14374c-170f-4333-9bd7-41ce280edda4 is in state STARTED 2025-09-19 01:03:00.055553 | orchestrator | 2025-09-19 01:03:00 | INFO  | Task 2df3c146-4008-4035-a4f1-271f77c7d1e4 is in state STARTED 2025-09-19 01:03:00.055580 | orchestrator | 2025-09-19 01:03:00 | INFO  | Wait 1 
second(s) until the next check 2025-09-19 01:03:03.098180 | orchestrator | 2025-09-19 01:03:03 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:03:03.098906 | orchestrator | 2025-09-19 01:03:03 | INFO  | Task aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state STARTED 2025-09-19 01:03:03.100129 | orchestrator | 2025-09-19 01:03:03 | INFO  | Task 7b14374c-170f-4333-9bd7-41ce280edda4 is in state STARTED 2025-09-19 01:03:03.101164 | orchestrator | 2025-09-19 01:03:03 | INFO  | Task 2df3c146-4008-4035-a4f1-271f77c7d1e4 is in state STARTED 2025-09-19 01:03:03.101641 | orchestrator | 2025-09-19 01:03:03 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:03:06.143063 | orchestrator | 2025-09-19 01:03:06 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:03:06.143734 | orchestrator | 2025-09-19 01:03:06 | INFO  | Task aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state STARTED 2025-09-19 01:03:06.144867 | orchestrator | 2025-09-19 01:03:06 | INFO  | Task 7b14374c-170f-4333-9bd7-41ce280edda4 is in state STARTED 2025-09-19 01:03:06.145724 | orchestrator | 2025-09-19 01:03:06 | INFO  | Task 2df3c146-4008-4035-a4f1-271f77c7d1e4 is in state STARTED 2025-09-19 01:03:06.145928 | orchestrator | 2025-09-19 01:03:06 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:03:09.195142 | orchestrator | 2025-09-19 01:03:09 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:03:09.196023 | orchestrator | 2025-09-19 01:03:09 | INFO  | Task aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state STARTED 2025-09-19 01:03:09.197616 | orchestrator | 2025-09-19 01:03:09 | INFO  | Task 7b14374c-170f-4333-9bd7-41ce280edda4 is in state STARTED 2025-09-19 01:03:09.199050 | orchestrator | 2025-09-19 01:03:09 | INFO  | Task 2df3c146-4008-4035-a4f1-271f77c7d1e4 is in state STARTED 2025-09-19 01:03:09.199091 | orchestrator | 2025-09-19 01:03:09 | INFO  | Wait 1 second(s) until the next check 
2025-09-19 01:04:00.991586 | orchestrator | 2025-09-19 01:04:00 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED
2025-09-19 01:04:00.993466 | orchestrator | 2025-09-19 01:04:00 | INFO  | Task aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state STARTED
2025-09-19 01:04:00.995773 | orchestrator | 2025-09-19 01:04:00 | INFO  | Task 7b14374c-170f-4333-9bd7-41ce280edda4 is in state STARTED
2025-09-19 01:04:00.998130 | orchestrator | 2025-09-19 01:04:00 | INFO  | Task 2df3c146-4008-4035-a4f1-271f77c7d1e4 is in state STARTED
2025-09-19 01:04:00.998155 | orchestrator | 2025-09-19 01:04:00 | INFO  | Wait 1 second(s) until the next check
2025-09-19 01:04:04.054874 | orchestrator | 2025-09-19 01:04:04 | INFO  | Task e7ff293b-b61f-45e7-916e-90f18e292931 is in state STARTED
2025-09-19 01:04:04.055134 | orchestrator | 2025-09-19 01:04:04 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED
2025-09-19 01:04:04.056383 | orchestrator | 2025-09-19 01:04:04 | INFO  | Task aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state STARTED
2025-09-19 01:04:04.057873 | orchestrator | 2025-09-19 01:04:04 | INFO  | Task 7b14374c-170f-4333-9bd7-41ce280edda4 is in state SUCCESS
2025-09-19 01:04:04.059729 | orchestrator |
2025-09-19 01:04:04.059769 | orchestrator |
2025-09-19 01:04:04.059822 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-09-19 01:04:04.059836 | orchestrator |
2025-09-19 01:04:04.059848 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-09-19 01:04:04.059860 | orchestrator | Friday 19 September 2025 00:56:24 +0000 (0:00:00.131) 0:00:00.131 ******
2025-09-19 01:04:04.059873 | orchestrator | changed: [localhost]
2025-09-19 01:04:04.059886 | orchestrator |
2025-09-19 01:04:04.059898 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-09-19 01:04:04.059910 | orchestrator | Friday 19 September 2025 00:56:26 +0000 (0:00:01.280) 0:00:01.412 ******
2025-09-19 01:04:04.059922 | orchestrator |
2025-09-19 01:04:04.059934 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-09-19 01:04:04.059946 | orchestrator |
2025-09-19 01:04:04.059958 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-09-19 01:04:04.059997 | orchestrator |
2025-09-19 01:04:04.060010 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-09-19 01:04:04.060023 | orchestrator |
2025-09-19 01:04:04.060035 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-09-19 01:04:04.060047 | orchestrator |
2025-09-19 01:04:04.060059 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-09-19 01:04:04.060071 | orchestrator |
2025-09-19 01:04:04.060083 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-09-19 01:04:04.060094 | orchestrator |
2025-09-19 01:04:04.060120 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-09-19 01:04:04.060132 | orchestrator | changed: [localhost]
2025-09-19 01:04:04.060144 | orchestrator |
2025-09-19 01:04:04.060156 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-09-19 01:04:04.060168 | orchestrator | Friday 19 September 2025 01:02:22 +0000 (0:05:55.995) 0:05:57.408 ******
2025-09-19 01:04:04.060179 | orchestrator | changed: [localhost]
2025-09-19 01:04:04.060191 | orchestrator |
2025-09-19 01:04:04.060203 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 01:04:04.060215 | orchestrator |
2025-09-19 01:04:04.060227 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 01:04:04.060270 | orchestrator | Friday 19 September 2025 01:02:35 +0000 (0:00:13.332) 0:06:10.740 ******
2025-09-19 01:04:04.060281 | orchestrator | ok: [testbed-node-0]
2025-09-19 01:04:04.060292 | orchestrator | ok: [testbed-node-1]
2025-09-19 01:04:04.060303 | orchestrator | ok: [testbed-node-2]
2025-09-19 01:04:04.060313 | orchestrator |
2025-09-19 01:04:04.060327 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 01:04:04.060340 | orchestrator | Friday 19 September 2025 01:02:36 +0000 (0:00:00.616) 0:06:11.357 ******
2025-09-19 01:04:04.060354 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-09-19 01:04:04.060367 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-09-19 01:04:04.060380 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-09-19 01:04:04.060394 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-09-19 01:04:04.060407 | orchestrator |
2025-09-19 01:04:04.060420 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-09-19 01:04:04.060433 | orchestrator | skipping: no hosts matched
2025-09-19 01:04:04.060447 | orchestrator |
2025-09-19 01:04:04.060460 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 01:04:04.060474 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 01:04:04.060490 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 01:04:04.060505 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 01:04:04.060518 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 01:04:04.060532 | orchestrator |
2025-09-19 01:04:04.060546 | orchestrator |
2025-09-19 01:04:04.060559 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 01:04:04.060572 | orchestrator | Friday 19 September 2025 01:02:37 +0000 (0:00:00.855) 0:06:12.213 ******
2025-09-19 01:04:04.060586 | orchestrator | ===============================================================================
2025-09-19 01:04:04.060600 | orchestrator | Download ironic-agent initramfs --------------------------------------- 356.00s
2025-09-19 01:04:04.060612 | orchestrator | Download ironic-agent kernel ------------------------------------------- 13.33s
2025-09-19 01:04:04.060624 | orchestrator | Ensure the destination directory exists --------------------------------- 1.28s
2025-09-19 01:04:04.060645 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.86s
2025-09-19 01:04:04.060658 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.62s
2025-09-19 01:04:04.060670 | orchestrator |
2025-09-19 01:04:04.060682 | orchestrator |
2025-09-19 01:04:04.060693 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 01:04:04.060703 | orchestrator |
2025-09-19 01:04:04.060714 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 01:04:04.060725 | orchestrator | Friday 19 September 2025 01:01:22 +0000 (0:00:00.231) 0:00:00.231 ******
2025-09-19 01:04:04.060736 | orchestrator | ok: [testbed-node-0]
2025-09-19 01:04:04.060747 | orchestrator | ok: [testbed-node-1]
2025-09-19 01:04:04.060758 | orchestrator | ok: [testbed-node-2]
2025-09-19 01:04:04.060769 | orchestrator |
2025-09-19 01:04:04.060780 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 01:04:04.060790 | orchestrator | Friday 19 September 2025 01:01:22 +0000 (0:00:00.275) 0:00:00.507 ******
2025-09-19 01:04:04.060801 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-09-19 01:04:04.060812 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-09-19 01:04:04.060836 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-09-19 01:04:04.060848 | orchestrator |
2025-09-19 01:04:04.060859 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-09-19 01:04:04.060869 | orchestrator |
2025-09-19 01:04:04.060880 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-19 01:04:04.060891 | orchestrator | Friday 19 September 2025 01:01:23 +0000 (0:00:00.362) 0:00:00.869 ******
2025-09-19 01:04:04.060902 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 01:04:04.060913 | orchestrator |
2025-09-19 01:04:04.060924 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-09-19 01:04:04.060935 | orchestrator | Friday 19 September 2025 01:01:23 +0000 (0:00:00.488) 0:00:01.357 ******
2025-09-19 01:04:04.060946 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-09-19 01:04:04.060957 | orchestrator |
2025-09-19 01:04:04.060968 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-09-19 01:04:04.060979 | orchestrator | Friday 19 September 2025 01:01:26 +0000 (0:00:03.316) 0:00:04.674 ******
2025-09-19 01:04:04.060989 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-09-19 01:04:04.061006 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-09-19 01:04:04.061017 | orchestrator |
2025-09-19 01:04:04.061028 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-09-19 01:04:04.061039 | orchestrator | Friday 19 September 2025 01:01:33 +0000 (0:00:06.404) 0:00:11.078 ******
2025-09-19 01:04:04.061050 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 01:04:04.061061 | orchestrator |
2025-09-19 01:04:04.061072 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-09-19 01:04:04.061083 | orchestrator | Friday 19 September 2025 01:01:36 +0000 (0:00:03.468) 0:00:14.546 ******
2025-09-19 01:04:04.061094 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 01:04:04.061106 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-09-19 01:04:04.061117 | orchestrator |
2025-09-19 01:04:04.061128 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2025-09-19 01:04:04.061139 | orchestrator | Friday 19 September 2025 01:01:40 +0000 (0:00:03.978) 0:00:18.525 ******
2025-09-19 01:04:04.061150 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 01:04:04.061161 | orchestrator |
2025-09-19 01:04:04.061172 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2025-09-19 01:04:04.061183 | orchestrator | Friday 19 September 2025 01:01:44 +0000 (0:00:03.457) 0:00:21.982 ******
2025-09-19
01:04:04.061194 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-09-19 01:04:04.061211 | orchestrator | 2025-09-19 01:04:04.061222 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-09-19 01:04:04.061249 | orchestrator | Friday 19 September 2025 01:01:48 +0000 (0:00:04.046) 0:00:26.028 ****** 2025-09-19 01:04:04.061266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 01:04:04.061299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 01:04:04.061313 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 01:04:04.061334 | orchestrator | 2025-09-19 01:04:04.061346 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-19 01:04:04.061356 | orchestrator | Friday 19 September 2025 01:01:51 +0000 (0:00:03.070) 0:00:29.099 ****** 2025-09-19 01:04:04.061368 | orchestrator | 
included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 01:04:04.061379 | orchestrator |
2025-09-19 01:04:04.061390 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2025-09-19 01:04:04.061401 | orchestrator | Friday 19 September 2025 01:01:51 +0000 (0:00:00.568) 0:00:29.668 ******
2025-09-19 01:04:04.061412 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:04:04.061423 | orchestrator | changed: [testbed-node-2]
2025-09-19 01:04:04.061434 | orchestrator | changed: [testbed-node-1]
2025-09-19 01:04:04.061444 | orchestrator |
2025-09-19 01:04:04.061455 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2025-09-19 01:04:04.061466 | orchestrator | Friday 19 September 2025 01:01:55 +0000 (0:00:03.896) 0:00:33.564 ******
2025-09-19 01:04:04.061477 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-19 01:04:04.061493 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-19 01:04:04.061505 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-19 01:04:04.061516 | orchestrator |
2025-09-19 01:04:04.061526 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2025-09-19 01:04:04.061537 | orchestrator | Friday 19 September 2025 01:01:57 +0000 (0:00:01.516) 0:00:35.081 ******
2025-09-19 01:04:04.061548 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-19 01:04:04.061559 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-19 01:04:04.061570 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-19 01:04:04.061580 | orchestrator |
2025-09-19 01:04:04.061591 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2025-09-19 01:04:04.061602 | orchestrator | Friday 19 September 2025 01:01:58 +0000 (0:00:01.298) 0:00:36.380 ******
2025-09-19 01:04:04.061613 | orchestrator | ok: [testbed-node-0]
2025-09-19 01:04:04.061624 | orchestrator | ok: [testbed-node-1]
2025-09-19 01:04:04.061635 | orchestrator | ok: [testbed-node-2]
2025-09-19 01:04:04.061654 | orchestrator |
2025-09-19 01:04:04.061671 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-09-19 01:04:04.061682 | orchestrator | Friday 19 September 2025 01:01:59 +0000 (0:00:01.273) 0:00:37.653 ******
2025-09-19 01:04:04.061693 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:04:04.061704 | orchestrator |
2025-09-19 01:04:04.061715 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-09-19 01:04:04.061726 | orchestrator | Friday 19 September 2025 01:02:00 +0000 (0:00:00.286) 0:00:37.940 ******
2025-09-19 01:04:04.061737 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:04:04.061748 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:04:04.061759 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:04:04.061769 | orchestrator |
2025-09-19 01:04:04.061780 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-19 01:04:04.061791 | orchestrator | Friday 19 September 2025 01:02:00 +0000 (0:00:00.757) 0:00:38.698 ******
2025-09-19 01:04:04.061802 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 01:04:04.061813 | orchestrator |
2025-09-19 01:04:04.061823 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
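The glance_api container definitions echoed in this play carry a Docker-style healthcheck ('healthcheck_curl http://<node-ip>:9292' with interval 30, retries 3, timeout 30): the container is flagged unhealthy only after `retries` consecutive failed probes, and any passing probe resets the streak. A small sketch of that semantics, assuming a plain list of probe outcomes rather than the real healthcheck_curl command:

```python
def classify(probe_results, retries=3):
    """Decide whether a Docker-style healthcheck would flag a container.

    probe_results is a sequence of booleans (True = probe passed).
    The container turns "unhealthy" after `retries` consecutive
    failures; any success resets the failure streak.
    """
    failures = 0
    for ok in probe_results:
        if ok:
            failures = 0
        else:
            failures += 1
            if failures >= retries:
                return "unhealthy"
    return "healthy"
```

With the defaults above, isolated probe failures (e.g. during a brief glance-api restart) never mark the container unhealthy; only a sustained outage of three probes in a row does.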
2025-09-19 01:04:04.061834 | orchestrator | Friday 19 September 2025 01:02:01 +0000 (0:00:00.586) 0:00:39.285 ****** 2025-09-19 01:04:04.061846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 01:04:04.061872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 01:04:04.061893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 01:04:04.061905 | orchestrator |
2025-09-19 01:04:04.061917 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] ***
2025-09-19 01:04:04.061927 | orchestrator | Friday 19 September 2025 01:02:05 +0000 (0:00:04.175) 0:00:43.461 ******
2025-09-19 01:04:04.061947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 01:04:04.061966 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:04:04.061983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 01:04:04.061995 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:04:04.062013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 01:04:04.062093 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:04:04.062105 | orchestrator |
2025-09-19 01:04:04.062116 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ******
2025-09-19 01:04:04.062127 | orchestrator | Friday 19 September 2025 01:02:08 +0000 (0:00:02.833) 0:00:46.294 ******
2025-09-19 01:04:04.062146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 01:04:04.062159 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:04:04.062170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 01:04:04.062197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 
'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 01:04:04.062218 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:04:04.062229 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:04:04.062259 | orchestrator |
2025-09-19 01:04:04.062270 | orchestrator | TASK [glance : Creating TLS backend PEM File] **********************************
2025-09-19 01:04:04.062281 | orchestrator | Friday 19 September 2025 01:02:12 +0000 (0:00:04.203) 0:00:50.498 ******
2025-09-19 01:04:04.062292 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:04:04.062303 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:04:04.062314 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:04:04.062325 | orchestrator |
2025-09-19 01:04:04.062335 | orchestrator | TASK [glance : Copying over config.json files for services] ********************
2025-09-19 01:04:04.062346 | orchestrator | Friday 19 September 2025 01:02:18 +0000 (0:00:05.745) 0:00:56.244 ******
2025-09-19 01:04:04.062358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 01:04:04.062527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 01:04:04.062612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 01:04:04.062627 | orchestrator |
2025-09-19 01:04:04.062638 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2025-09-19 01:04:04.062648 | orchestrator | Friday 19 September 2025 01:02:22 +0000 (0:00:04.281) 0:01:00.525 ******
2025-09-19 01:04:04.062657 | orchestrator | changed: [testbed-node-1]
2025-09-19 01:04:04.062667 | orchestrator | changed: [testbed-node-2]
2025-09-19 01:04:04.062676 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:04:04.062684 | orchestrator |
2025-09-19 01:04:04.062716 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2025-09-19 01:04:04.062725 | orchestrator | Friday 19 September 2025 01:02:28 +0000 (0:00:05.793) 0:01:06.319 ******
2025-09-19 01:04:04.062734 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:04:04.062743 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:04:04.062752 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:04:04.062761 | orchestrator |
2025-09-19 01:04:04.062769 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2025-09-19 01:04:04.062778 | orchestrator | Friday 19 September 2025 01:02:32 +0000 (0:00:03.631) 0:01:09.950 ******
2025-09-19 01:04:04.062787 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:04:04.062796 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:04:04.062805 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:04:04.062813 | orchestrator |
2025-09-19 01:04:04.062822 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2025-09-19 01:04:04.062831 | orchestrator | Friday 19 September 2025 01:02:37 +0000 (0:00:05.199) 0:01:15.150 ******
2025-09-19 01:04:04.062840 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:04:04.062864 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:04:04.062874 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:04:04.062883 | orchestrator |
2025-09-19 01:04:04.062892 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2025-09-19 01:04:04.062901 | orchestrator | Friday 19 September 2025 01:02:42 +0000 (0:00:04.821) 0:01:19.971 ******
2025-09-19 01:04:04.062910 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:04:04.062919 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:04:04.062927 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:04:04.062936 | orchestrator |
2025-09-19 01:04:04.062945 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2025-09-19 01:04:04.062954 | orchestrator | Friday 19 September 2025 01:02:47 +0000 (0:00:05.510) 0:01:25.481 ******
2025-09-19 01:04:04.062963 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:04:04.062971 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:04:04.062980 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:04:04.062989 | orchestrator |
2025-09-19 01:04:04.062998 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2025-09-19 01:04:04.063007 | orchestrator | Friday 19 September 2025 01:02:48 +0000 (0:00:00.451) 0:01:25.933 ******
2025-09-19 01:04:04.063016 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2) 
2025-09-19 01:04:04.063025 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:04:04.063042 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2) 
2025-09-19 01:04:04.063055 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:04:04.063065 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2) 
2025-09-19 01:04:04.063076 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:04:04.063086 | orchestrator |
2025-09-19 01:04:04.063098 | orchestrator | TASK [glance : Check glance containers] ****************************************
2025-09-19 01:04:04.063109 | orchestrator | Friday 19 September 2025 01:02:51 +0000 (0:00:03.465) 0:01:29.399 ******
2025-09-19 01:04:04.063121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 
'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 01:04:04.063148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 01:04:04.063165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 01:04:04.063181 | orchestrator | 2025-09-19 01:04:04.063191 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-19 01:04:04.063200 | orchestrator | Friday 19 September 2025 01:02:55 +0000 (0:00:03.758) 0:01:33.157 ****** 2025-09-19 01:04:04.063209 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:04:04.063218 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:04:04.063226 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:04:04.063261 | orchestrator | 2025-09-19 01:04:04.063278 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-09-19 01:04:04.063294 | orchestrator | Friday 19 September 2025 01:02:55 +0000 (0:00:00.328) 0:01:33.486 ****** 2025-09-19 01:04:04.063308 | orchestrator | changed: [testbed-node-0] 2025-09-19 01:04:04.063321 | orchestrator | 2025-09-19 01:04:04.063330 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-09-19 01:04:04.063339 | orchestrator | Friday 19 September 2025 01:02:57 +0000 (0:00:02.306) 0:01:35.793 ****** 2025-09-19 01:04:04.063347 | orchestrator | changed: [testbed-node-0] 2025-09-19 01:04:04.063356 | orchestrator | 2025-09-19 01:04:04.063365 | orchestrator | TASK [glance : Enable 
log_bin_trust_function_creators function] **************** 2025-09-19 01:04:04.063373 | orchestrator | Friday 19 September 2025 01:03:00 +0000 (0:00:02.278) 0:01:38.071 ****** 2025-09-19 01:04:04.063382 | orchestrator | changed: [testbed-node-0] 2025-09-19 01:04:04.063391 | orchestrator | 2025-09-19 01:04:04.063399 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-09-19 01:04:04.063408 | orchestrator | Friday 19 September 2025 01:03:02 +0000 (0:00:02.166) 0:01:40.238 ****** 2025-09-19 01:04:04.063417 | orchestrator | changed: [testbed-node-0] 2025-09-19 01:04:04.063426 | orchestrator | 2025-09-19 01:04:04.063435 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-09-19 01:04:04.063444 | orchestrator | Friday 19 September 2025 01:03:34 +0000 (0:00:32.306) 0:02:12.544 ****** 2025-09-19 01:04:04.063453 | orchestrator | changed: [testbed-node-0] 2025-09-19 01:04:04.063461 | orchestrator | 2025-09-19 01:04:04.063470 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-19 01:04:04.063479 | orchestrator | Friday 19 September 2025 01:03:36 +0000 (0:00:02.004) 0:02:14.549 ****** 2025-09-19 01:04:04.063488 | orchestrator | 2025-09-19 01:04:04.063496 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-19 01:04:04.063505 | orchestrator | Friday 19 September 2025 01:03:36 +0000 (0:00:00.240) 0:02:14.789 ****** 2025-09-19 01:04:04.063514 | orchestrator | 2025-09-19 01:04:04.063530 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-19 01:04:04.063540 | orchestrator | Friday 19 September 2025 01:03:37 +0000 (0:00:00.062) 0:02:14.852 ****** 2025-09-19 01:04:04.063548 | orchestrator | 2025-09-19 01:04:04.063557 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-09-19 
01:04:04.063566 | orchestrator | Friday 19 September 2025 01:03:37 +0000 (0:00:00.065) 0:02:14.917 ******
2025-09-19 01:04:04.063575 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:04:04.063584 | orchestrator | changed: [testbed-node-1]
2025-09-19 01:04:04.063593 | orchestrator | changed: [testbed-node-2]
2025-09-19 01:04:04.063602 | orchestrator |
2025-09-19 01:04:04.063611 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 01:04:04.063621 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-19 01:04:04.063631 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-19 01:04:04.063648 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-19 01:04:04.063658 | orchestrator |
2025-09-19 01:04:04.063667 | orchestrator |
2025-09-19 01:04:04.063680 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 01:04:04.063690 | orchestrator | Friday 19 September 2025 01:04:02 +0000 (0:00:25.002) 0:02:39.919 ******
2025-09-19 01:04:04.063698 | orchestrator | ===============================================================================
2025-09-19 01:04:04.063707 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 32.31s
2025-09-19 01:04:04.063716 | orchestrator | glance : Restart glance-api container ---------------------------------- 25.00s
2025-09-19 01:04:04.063725 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.40s
2025-09-19 01:04:04.063734 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.79s
2025-09-19 01:04:04.063742 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 5.75s
2025-09-19 01:04:04.063751 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.51s
2025-09-19 01:04:04.063760 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.20s
2025-09-19 01:04:04.063769 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.82s
2025-09-19 01:04:04.063778 | orchestrator | glance : Copying over config.json files for services -------------------- 4.28s
2025-09-19 01:04:04.063787 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.20s
2025-09-19 01:04:04.063795 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.18s
2025-09-19 01:04:04.063804 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.05s
2025-09-19 01:04:04.063813 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.98s
2025-09-19 01:04:04.063821 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.90s
2025-09-19 01:04:04.063830 | orchestrator | glance : Check glance containers ---------------------------------------- 3.76s
2025-09-19 01:04:04.063839 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.63s
2025-09-19 01:04:04.063848 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.47s
2025-09-19 01:04:04.063856 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.47s
2025-09-19 01:04:04.063865 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.46s
2025-09-19 01:04:04.063874 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.32s
2025-09-19 01:04:04.063883 | orchestrator | 2025-09-19 01:04:04 | INFO  | Task 2df3c146-4008-4035-a4f1-271f77c7d1e4 is in state STARTED
2025-09-19 01:04:04.063893 |
orchestrator | 2025-09-19 01:04:04 | INFO  | Wait 1 second(s) until the next check
2025-09-19 01:04:34.591236 | orchestrator | 2025-09-19 01:04:34 | INFO  | Task e7ff293b-b61f-45e7-916e-90f18e292931 is in state STARTED
2025-09-19 01:04:34.593558 | orchestrator | 2025-09-19 01:04:34 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED
2025-09-19 01:04:34.595764 | orchestrator | 2025-09-19 01:04:34 | INFO  | Task aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state STARTED
2025-09-19 01:04:34.597564 | orchestrator | 2025-09-19 01:04:34 | INFO  | Task 2df3c146-4008-4035-a4f1-271f77c7d1e4 is in state STARTED
2025-09-19 01:04:34.597688 | orchestrator | 2025-09-19 01:04:34 | INFO  | Wait 1 second(s) until the next check
2025-09-19 01:04:37.642190 | orchestrator | 2025-09-19 01:04:37 | INFO  | Task e7ff293b-b61f-45e7-916e-90f18e292931 is in state STARTED
2025-09-19 01:04:37.643079 | orchestrator | 2025-09-19 01:04:37 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED
2025-09-19 01:04:37.645033 | orchestrator | 2025-09-19 01:04:37 | INFO  | Task aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state STARTED
2025-09-19 01:04:37.646802 | orchestrator | 2025-09-19 01:04:37 | INFO  | Task 2df3c146-4008-4035-a4f1-271f77c7d1e4 is in state STARTED
2025-09-19 01:04:37.646882 | orchestrator | 2025-09-19 01:04:37 | INFO  | Wait 1 second(s) until the next check
2025-09-19 01:04:40.683974 | orchestrator | 2025-09-19 01:04:40 | INFO  | Task e7ff293b-b61f-45e7-916e-90f18e292931 is in state STARTED
2025-09-19 01:04:40.684603 | orchestrator | 2025-09-19 01:04:40 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED
2025-09-19 01:04:40.685783 | orchestrator | 2025-09-19 01:04:40 | INFO  | Task aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state STARTED
2025-09-19 01:04:40.688180 | orchestrator | 2025-09-19 01:04:40 | INFO  | Task 2df3c146-4008-4035-a4f1-271f77c7d1e4 is in state SUCCESS
2025-09-19 01:04:40.689745 | orchestrator |
2025-09-19 01:04:40.689777 | orchestrator |
2025-09-19 01:04:40.689790 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 01:04:40.689803 | orchestrator |
2025-09-19 01:04:40.689815 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 01:04:40.689827 | orchestrator | Friday 19 September 2025 01:01:33 +0000 (0:00:00.307) 0:00:00.307 ******
2025-09-19 01:04:40.689838 | orchestrator | ok: [testbed-node-0]
2025-09-19 01:04:40.689850 | orchestrator | ok: [testbed-node-1]
2025-09-19 01:04:40.689861 | orchestrator | ok: [testbed-node-2]
2025-09-19 01:04:40.689871 | orchestrator | ok: [testbed-node-3]
2025-09-19 01:04:40.689937 | orchestrator | ok: [testbed-node-4]
2025-09-19 01:04:40.689950 | orchestrator | ok: [testbed-node-5]
2025-09-19 01:04:40.689961 | orchestrator |
2025-09-19 01:04:40.689972 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 01:04:40.689983 | orchestrator | Friday 19 September 2025 01:01:34 +0000 (0:00:01.412) 0:00:01.719 ******
2025-09-19 01:04:40.689994 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-09-19 01:04:40.690006 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-09-19 01:04:40.690075 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-09-19 01:04:40.690088 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-09-19 01:04:40.690099 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-09-19 01:04:40.690110 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-09-19 01:04:40.690121 | orchestrator |
2025-09-19 01:04:40.690132 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-09-19 01:04:40.690143 | orchestrator |
2025-09-19 01:04:40.690154 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-09-19 01:04:40.690164 | orchestrator | Friday 19 September 2025 01:01:36 +0000 (0:00:01.896) 0:00:03.616 ******
2025-09-19 01:04:40.690176 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 01:04:40.690189 | orchestrator |
2025-09-19 01:04:40.690200 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-09-19 01:04:40.690211 | orchestrator | Friday 19 September 2025 01:01:38 +0000 (0:00:02.220) 0:00:05.836 ******
2025-09-19 01:04:40.690222 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-09-19 01:04:40.690233 | orchestrator |
2025-09-19 01:04:40.690416 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-09-19 01:04:40.690448 | orchestrator | Friday 19 September 2025 01:01:42 +0000 (0:00:03.503) 0:00:09.339 ******
2025-09-19 01:04:40.690470 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-09-19 01:04:40.690492 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-09-19 01:04:40.690511 | orchestrator |
2025-09-19 01:04:40.690531 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-09-19 01:04:40.690551 | orchestrator | Friday 19 September 2025 01:01:48 +0000 (0:00:06.701) 0:00:16.040 ******
2025-09-19 01:04:40.690571 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 01:04:40.690587 | orchestrator |
2025-09-19 01:04:40.690601 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-09-19 01:04:40.690615 | orchestrator | Friday 19 September 2025 01:01:52 +0000 (0:00:03.593) 0:00:19.634 ******
2025-09-19 01:04:40.690628 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19
01:04:40.690640 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-09-19 01:04:40.690653 | orchestrator | 2025-09-19 01:04:40.690667 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-09-19 01:04:40.690680 | orchestrator | Friday 19 September 2025 01:01:56 +0000 (0:00:04.190) 0:00:23.824 ****** 2025-09-19 01:04:40.690692 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 01:04:40.690705 | orchestrator | 2025-09-19 01:04:40.690715 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-09-19 01:04:40.690726 | orchestrator | Friday 19 September 2025 01:02:00 +0000 (0:00:03.609) 0:00:27.433 ****** 2025-09-19 01:04:40.690737 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-09-19 01:04:40.690747 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-09-19 01:04:40.690758 | orchestrator | 2025-09-19 01:04:40.690769 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-09-19 01:04:40.690780 | orchestrator | Friday 19 September 2025 01:02:08 +0000 (0:00:08.087) 0:00:35.521 ****** 2025-09-19 01:04:40.690837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 01:04:40.690860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 01:04:40.690908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.690925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.690937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 01:04:40.690962 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.690997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.691018 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.691069 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.691082 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.691107 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.691126 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.691138 | orchestrator | 2025-09-19 01:04:40.691150 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-19 01:04:40.691161 | orchestrator | Friday 19 September 2025 01:02:11 +0000 (0:00:02.969) 0:00:38.490 ****** 2025-09-19 01:04:40.691172 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:04:40.691183 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:04:40.691193 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:04:40.691204 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:04:40.691215 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:04:40.691225 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:04:40.691236 | orchestrator | 2025-09-19 01:04:40.691268 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-19 01:04:40.691281 | orchestrator | Friday 19 September 2025 01:02:12 +0000 (0:00:00.882) 0:00:39.373 ****** 2025-09-19 
01:04:40.691292 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:04:40.691302 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:04:40.691313 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:04:40.691324 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 01:04:40.691334 | orchestrator | 2025-09-19 01:04:40.691345 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-09-19 01:04:40.691356 | orchestrator | Friday 19 September 2025 01:02:12 +0000 (0:00:00.812) 0:00:40.186 ****** 2025-09-19 01:04:40.691367 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-09-19 01:04:40.691378 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-09-19 01:04:40.691389 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-09-19 01:04:40.691399 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-09-19 01:04:40.691410 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-09-19 01:04:40.691420 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-09-19 01:04:40.691431 | orchestrator | 2025-09-19 01:04:40.691441 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-09-19 01:04:40.691452 | orchestrator | Friday 19 September 2025 01:02:15 +0000 (0:00:02.388) 0:00:42.574 ****** 2025-09-19 01:04:40.691464 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 01:04:40.691484 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 01:04:40.691506 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 01:04:40.691518 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 01:04:40.691531 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 01:04:40.691542 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 01:04:40.691561 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 01:04:40.691583 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 01:04:40.691595 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 01:04:40.691607 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 01:04:40.691624 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 01:04:40.691646 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 01:04:40.691658 | orchestrator | 2025-09-19 01:04:40.691669 | orchestrator | TASK [cinder : Copy over 
Ceph keyring files for cinder-volume] ***************** 2025-09-19 01:04:40.691680 | orchestrator | Friday 19 September 2025 01:02:19 +0000 (0:00:04.330) 0:00:46.904 ****** 2025-09-19 01:04:40.691692 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-19 01:04:40.691703 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-19 01:04:40.691714 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-19 01:04:40.691725 | orchestrator | 2025-09-19 01:04:40.691736 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-09-19 01:04:40.691747 | orchestrator | Friday 19 September 2025 01:02:21 +0000 (0:00:01.939) 0:00:48.844 ****** 2025-09-19 01:04:40.691764 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-09-19 01:04:40.691776 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-09-19 01:04:40.691786 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-09-19 01:04:40.691797 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-09-19 01:04:40.691808 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-09-19 01:04:40.691818 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-09-19 01:04:40.691829 | orchestrator | 2025-09-19 01:04:40.691840 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-09-19 01:04:40.691851 | orchestrator | Friday 19 September 2025 01:02:24 +0000 (0:00:02.994) 0:00:51.838 ****** 2025-09-19 01:04:40.691861 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-09-19 01:04:40.691872 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-09-19 
01:04:40.691883 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-09-19 01:04:40.691894 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-09-19 01:04:40.691904 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-09-19 01:04:40.691915 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-09-19 01:04:40.691932 | orchestrator | 2025-09-19 01:04:40.691943 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-09-19 01:04:40.691953 | orchestrator | Friday 19 September 2025 01:02:25 +0000 (0:00:01.123) 0:00:52.962 ****** 2025-09-19 01:04:40.691964 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:04:40.691975 | orchestrator | 2025-09-19 01:04:40.691985 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-09-19 01:04:40.691996 | orchestrator | Friday 19 September 2025 01:02:25 +0000 (0:00:00.132) 0:00:53.094 ****** 2025-09-19 01:04:40.692007 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:04:40.692018 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:04:40.692028 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:04:40.692039 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:04:40.692050 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:04:40.692060 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:04:40.692071 | orchestrator | 2025-09-19 01:04:40.692082 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-19 01:04:40.692093 | orchestrator | Friday 19 September 2025 01:02:26 +0000 (0:00:00.775) 0:00:53.870 ****** 2025-09-19 01:04:40.692104 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 01:04:40.692116 | orchestrator | 2025-09-19 01:04:40.692126 | orchestrator | TASK 
[service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-09-19 01:04:40.692137 | orchestrator | Friday 19 September 2025 01:02:28 +0000 (0:00:01.585) 0:00:55.455 ****** 2025-09-19 01:04:40.692149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 01:04:40.692165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 01:04:40.692185 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.692202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 01:04:40.692214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.692226 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.692242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.692781 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.692814 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.692826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.692838 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.692849 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.692861 | orchestrator | 2025-09-19 
01:04:40.692872 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-09-19 01:04:40.692883 | orchestrator | Friday 19 September 2025 01:02:32 +0000 (0:00:03.780) 0:00:59.235 ****** 2025-09-19 01:04:40.692907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 01:04:40.692926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 01:04:40.692937 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:04:40.692949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 01:04:40.692961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 01:04:40.692972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 01:04:40.692988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 01:04:40.693000 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:04:40.693017 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:04:40.693035 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 01:04:40.693047 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 01:04:40.693058 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:04:40.693070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 01:04:40.693082 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 01:04:40.693093 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:04:40.693110 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 01:04:40.693135 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 01:04:40.693147 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:04:40.693158 | orchestrator | 2025-09-19 01:04:40.693169 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-09-19 01:04:40.693180 | orchestrator | Friday 19 September 2025 01:02:33 +0000 (0:00:01.616) 0:01:00.852 ****** 2025-09-19 01:04:40.693192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 01:04:40.693204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 01:04:40.693215 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:04:40.693226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 01:04:40.693243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 01:04:40.693312 
| orchestrator | skipping: [testbed-node-3] 2025-09-19 01:04:40.693332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 01:04:40.693344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 01:04:40.693355 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:04:40.693367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 01:04:40.693379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 01:04:40.693391 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:04:40.693407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', 
'', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 01:04:40.693434 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 01:04:40.693447 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:04:40.693458 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 
 2025-09-19 01:04:40.693470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 01:04:40.693481 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:04:40.693492 | orchestrator | 2025-09-19 01:04:40.693503 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-09-19 01:04:40.693514 | orchestrator | Friday 19 September 2025 01:02:35 +0000 (0:00:01.947) 0:01:02.799 ****** 2025-09-19 01:04:40.693526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 01:04:40.693547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 01:04:40.693563 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.693574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 01:04:40.693584 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.693594 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': 
True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.693614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.693630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.693640 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 
'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.693651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.693661 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.693676 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.693686 | orchestrator | 2025-09-19 01:04:40.693696 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-09-19 01:04:40.693706 | orchestrator | Friday 19 September 2025 01:02:39 +0000 (0:00:03.466) 0:01:06.266 ****** 2025-09-19 01:04:40.693720 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-19 01:04:40.693730 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:04:40.693740 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-19 01:04:40.693750 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:04:40.693760 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-19 01:04:40.693770 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-19 01:04:40.693780 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-19 01:04:40.693790 | orchestrator | skipping: [testbed-node-5] => 
(item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-19 01:04:40.693804 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:04:40.693814 | orchestrator | 2025-09-19 01:04:40.693824 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-09-19 01:04:40.693833 | orchestrator | Friday 19 September 2025 01:02:41 +0000 (0:00:02.349) 0:01:08.615 ****** 2025-09-19 01:04:40.693843 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.693854 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.693864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 01:04:40.693884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 01:04:40.693899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 01:04:40.693910 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.693920 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.693937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.693947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}}) 2025-09-19 01:04:40.693962 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.693977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.693988 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.693998 | orchestrator | 2025-09-19 01:04:40.694008 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-09-19 01:04:40.694047 | orchestrator | Friday 19 September 2025 01:02:50 +0000 (0:00:09.386) 0:01:18.002 ****** 2025-09-19 01:04:40.694059 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:04:40.694069 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:04:40.694085 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:04:40.694095 | orchestrator | changed: [testbed-node-3] 2025-09-19 01:04:40.694105 | orchestrator | changed: [testbed-node-4] 2025-09-19 01:04:40.694114 | orchestrator | changed: [testbed-node-5] 2025-09-19 01:04:40.694123 | orchestrator | 2025-09-19 01:04:40.694133 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-09-19 01:04:40.694143 | orchestrator | Friday 19 September 2025 01:02:52 +0000 (0:00:02.093) 0:01:20.095 ****** 2025-09-19 01:04:40.694153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 01:04:40.694163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 01:04:40.694183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 01:04:40.694194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 01:04:40.694205 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:04:40.694215 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:04:40.694225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 01:04:40.694240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 01:04:40.694266 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:04:40.694277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 01:04:40.694291 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 01:04:40.694302 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:04:40.694318 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 01:04:40.694328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 01:04:40.694348 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:04:40.694358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 01:04:40.694369 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 01:04:40.694379 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:04:40.694388 | orchestrator | 2025-09-19 01:04:40.694398 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-09-19 01:04:40.694408 | orchestrator | Friday 19 September 2025 01:02:54 +0000 (0:00:01.291) 0:01:21.386 ****** 2025-09-19 01:04:40.694418 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:04:40.694427 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:04:40.694437 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:04:40.694451 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:04:40.694461 | 
orchestrator | skipping: [testbed-node-4] 2025-09-19 01:04:40.694470 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:04:40.694480 | orchestrator | 2025-09-19 01:04:40.694489 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-09-19 01:04:40.694499 | orchestrator | Friday 19 September 2025 01:02:54 +0000 (0:00:00.650) 0:01:22.037 ****** 2025-09-19 01:04:40.694515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 01:04:40.694531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 01:04:40.694541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 01:04:40.694551 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.694566 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.694582 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.694598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.694608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.694618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.694628 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.694642 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.694658 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 01:04:40.694680 | orchestrator | 2025-09-19 01:04:40.694689 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-19 01:04:40.694699 | orchestrator | Friday 19 September 2025 01:02:57 +0000 (0:00:02.365) 0:01:24.402 ****** 2025-09-19 01:04:40.694709 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:04:40.694719 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:04:40.694728 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:04:40.694738 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:04:40.694747 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:04:40.694757 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:04:40.694767 | orchestrator | 2025-09-19 01:04:40.694776 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-09-19 01:04:40.694786 | orchestrator | Friday 19 September 2025 01:02:57 +0000 (0:00:00.592) 0:01:24.995 ****** 2025-09-19 01:04:40.694796 | orchestrator | changed: [testbed-node-0] 2025-09-19 01:04:40.694805 | orchestrator | 2025-09-19 01:04:40.694815 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-09-19 01:04:40.694825 | orchestrator | Friday 19 September 2025 01:02:59 +0000 (0:00:02.121) 0:01:27.116 ****** 2025-09-19 01:04:40.694834 | orchestrator | changed: [testbed-node-0] 2025-09-19 01:04:40.694844 | orchestrator | 2025-09-19 01:04:40.694853 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-09-19 01:04:40.694863 | orchestrator | Friday 19 September 2025 01:03:02 +0000 (0:00:02.330) 0:01:29.447 ****** 2025-09-19 01:04:40.694872 | orchestrator | changed: [testbed-node-0] 2025-09-19 01:04:40.694882 | orchestrator | 2025-09-19 01:04:40.694892 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2025-09-19 01:04:40.694901 | orchestrator | Friday 19 September 2025 01:03:25 +0000 (0:00:22.824) 0:01:52.271 ****** 2025-09-19 01:04:40.694911 | orchestrator | 2025-09-19 01:04:40.694920 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 01:04:40.694930 | orchestrator | Friday 19 September 2025 01:03:25 +0000 (0:00:00.064) 0:01:52.335 ****** 2025-09-19 01:04:40.694940 | orchestrator | 2025-09-19 01:04:40.694949 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 01:04:40.694959 | orchestrator | Friday 19 September 2025 01:03:25 +0000 (0:00:00.065) 0:01:52.401 ****** 2025-09-19 01:04:40.694968 | orchestrator | 2025-09-19 01:04:40.694978 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 01:04:40.694988 | orchestrator | Friday 19 September 2025 01:03:25 +0000 (0:00:00.060) 0:01:52.461 ****** 2025-09-19 01:04:40.694997 | orchestrator | 2025-09-19 01:04:40.695007 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 01:04:40.695016 | orchestrator | Friday 19 September 2025 01:03:25 +0000 (0:00:00.066) 0:01:52.528 ****** 2025-09-19 01:04:40.695026 | orchestrator | 2025-09-19 01:04:40.695036 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 01:04:40.695045 | orchestrator | Friday 19 September 2025 01:03:25 +0000 (0:00:00.064) 0:01:52.592 ****** 2025-09-19 01:04:40.695055 | orchestrator | 2025-09-19 01:04:40.695064 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-09-19 01:04:40.695074 | orchestrator | Friday 19 September 2025 01:03:25 +0000 (0:00:00.066) 0:01:52.659 ****** 2025-09-19 01:04:40.695084 | orchestrator | changed: [testbed-node-0] 2025-09-19 01:04:40.695093 | orchestrator | 
changed: [testbed-node-1] 2025-09-19 01:04:40.695103 | orchestrator | changed: [testbed-node-2] 2025-09-19 01:04:40.695118 | orchestrator | 2025-09-19 01:04:40.695128 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-09-19 01:04:40.695137 | orchestrator | Friday 19 September 2025 01:03:44 +0000 (0:00:19.045) 0:02:11.705 ****** 2025-09-19 01:04:40.695147 | orchestrator | changed: [testbed-node-2] 2025-09-19 01:04:40.695157 | orchestrator | changed: [testbed-node-0] 2025-09-19 01:04:40.695166 | orchestrator | changed: [testbed-node-1] 2025-09-19 01:04:40.695176 | orchestrator | 2025-09-19 01:04:40.695186 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-09-19 01:04:40.695195 | orchestrator | Friday 19 September 2025 01:03:55 +0000 (0:00:10.524) 0:02:22.229 ****** 2025-09-19 01:04:40.695205 | orchestrator | changed: [testbed-node-4] 2025-09-19 01:04:40.695214 | orchestrator | changed: [testbed-node-5] 2025-09-19 01:04:40.695228 | orchestrator | changed: [testbed-node-3] 2025-09-19 01:04:40.695237 | orchestrator | 2025-09-19 01:04:40.695259 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-09-19 01:04:40.695270 | orchestrator | Friday 19 September 2025 01:04:31 +0000 (0:00:36.604) 0:02:58.833 ****** 2025-09-19 01:04:40.695279 | orchestrator | changed: [testbed-node-3] 2025-09-19 01:04:40.695289 | orchestrator | changed: [testbed-node-5] 2025-09-19 01:04:40.695299 | orchestrator | changed: [testbed-node-4] 2025-09-19 01:04:40.695308 | orchestrator | 2025-09-19 01:04:40.695318 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-09-19 01:04:40.695328 | orchestrator | Friday 19 September 2025 01:04:37 +0000 (0:00:05.654) 0:03:04.488 ****** 2025-09-19 01:04:40.695338 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:04:40.695347 | orchestrator | 2025-09-19 
01:04:40.695357 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 01:04:40.695372 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-19 01:04:40.695382 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-19 01:04:40.695393 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-19 01:04:40.695402 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-19 01:04:40.695412 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-19 01:04:40.695422 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-19 01:04:40.695431 | orchestrator | 2025-09-19 01:04:40.695441 | orchestrator | 2025-09-19 01:04:40.695451 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 01:04:40.695460 | orchestrator | Friday 19 September 2025 01:04:37 +0000 (0:00:00.602) 0:03:05.090 ****** 2025-09-19 01:04:40.695470 | orchestrator | =============================================================================== 2025-09-19 01:04:40.695480 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 36.60s 2025-09-19 01:04:40.695490 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 22.82s 2025-09-19 01:04:40.695499 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 19.05s 2025-09-19 01:04:40.695509 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.52s 2025-09-19 01:04:40.695518 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 9.39s 
2025-09-19 01:04:40.695528 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.09s 2025-09-19 01:04:40.695538 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.70s 2025-09-19 01:04:40.695554 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.65s 2025-09-19 01:04:40.695563 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.33s 2025-09-19 01:04:40.695573 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.19s 2025-09-19 01:04:40.695582 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.78s 2025-09-19 01:04:40.695592 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.61s 2025-09-19 01:04:40.695601 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.59s 2025-09-19 01:04:40.695611 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.50s 2025-09-19 01:04:40.695621 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.47s 2025-09-19 01:04:40.695630 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.99s 2025-09-19 01:04:40.695640 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.97s 2025-09-19 01:04:40.695650 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.39s 2025-09-19 01:04:40.695659 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.37s 2025-09-19 01:04:40.695669 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.35s 2025-09-19 01:04:40.695678 | orchestrator | 2025-09-19 01:04:40 | INFO  | Task 0190091f-407c-4c75-aa47-febcb9d85658 is in state STARTED 
2025-09-19 01:04:40.695688 | orchestrator | 2025-09-19 01:04:40 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:04:43.734235 | orchestrator | 2025-09-19 01:04:43 | INFO  | Task e7ff293b-b61f-45e7-916e-90f18e292931 is in state STARTED 2025-09-19 01:04:43.735663 | orchestrator | 2025-09-19 01:04:43 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:04:43.737238 | orchestrator | 2025-09-19 01:04:43 | INFO  | Task aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state STARTED 2025-09-19 01:04:43.738366 | orchestrator | 2025-09-19 01:04:43 | INFO  | Task 0190091f-407c-4c75-aa47-febcb9d85658 is in state STARTED 2025-09-19 01:04:43.738658 | orchestrator | 2025-09-19 01:04:43 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:04:46.779075 | orchestrator | 2025-09-19 01:04:46 | INFO  | Task e7ff293b-b61f-45e7-916e-90f18e292931 is in state STARTED 2025-09-19 01:04:46.779306 | orchestrator | 2025-09-19 01:04:46 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:04:46.781129 | orchestrator | 2025-09-19 01:04:46 | INFO  | Task aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state STARTED 2025-09-19 01:04:46.782100 | orchestrator | 2025-09-19 01:04:46 | INFO  | Task 0190091f-407c-4c75-aa47-febcb9d85658 is in state STARTED 2025-09-19 01:04:46.782125 | orchestrator | 2025-09-19 01:04:46 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:04:49.841372 | orchestrator | 2025-09-19 01:04:49 | INFO  | Task e7ff293b-b61f-45e7-916e-90f18e292931 is in state STARTED 2025-09-19 01:04:49.841647 | orchestrator | 2025-09-19 01:04:49 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:04:49.843060 | orchestrator | 2025-09-19 01:04:49 | INFO  | Task aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state STARTED 2025-09-19 01:04:49.844561 | orchestrator | 2025-09-19 01:04:49 | INFO  | Task 0190091f-407c-4c75-aa47-febcb9d85658 is in state STARTED 2025-09-19 01:04:49.844586 | 
orchestrator | 2025-09-19 01:04:49 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:04:52.891427 | orchestrator | 2025-09-19 01:04:52 | INFO  | Task e7ff293b-b61f-45e7-916e-90f18e292931 is in state STARTED 2025-09-19 01:04:52.893355 | orchestrator | 2025-09-19 01:04:52 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:04:52.895403 | orchestrator | 2025-09-19 01:04:52 | INFO  | Task aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state STARTED 2025-09-19 01:04:52.897331 | orchestrator | 2025-09-19 01:04:52 | INFO  | Task 0190091f-407c-4c75-aa47-febcb9d85658 is in state STARTED 2025-09-19 01:04:52.897363 | orchestrator | 2025-09-19 01:04:52 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:04:55.934157 | orchestrator | 2025-09-19 01:04:55 | INFO  | Task e7ff293b-b61f-45e7-916e-90f18e292931 is in state STARTED 2025-09-19 01:04:55.934570 | orchestrator | 2025-09-19 01:04:55 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:04:55.935147 | orchestrator | 2025-09-19 01:04:55 | INFO  | Task aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state STARTED 2025-09-19 01:04:55.936282 | orchestrator | 2025-09-19 01:04:55 | INFO  | Task 0190091f-407c-4c75-aa47-febcb9d85658 is in state STARTED 2025-09-19 01:04:55.936315 | orchestrator | 2025-09-19 01:04:55 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:04:58.991896 | orchestrator | 2025-09-19 01:04:58 | INFO  | Task e7ff293b-b61f-45e7-916e-90f18e292931 is in state STARTED 2025-09-19 01:04:58.993662 | orchestrator | 2025-09-19 01:04:58 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:04:58.995497 | orchestrator | 2025-09-19 01:04:58 | INFO  | Task aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state STARTED 2025-09-19 01:04:58.997464 | orchestrator | 2025-09-19 01:04:58 | INFO  | Task 0190091f-407c-4c75-aa47-febcb9d85658 is in state STARTED 2025-09-19 01:04:58.997564 | orchestrator | 2025-09-19 
01:04:58 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:05:02.041263 | orchestrator | 2025-09-19 01:05:02.041416 | orchestrator | 2025-09-19 01:05:02.041433 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 01:05:02.041445 | orchestrator | 2025-09-19 01:05:02.041457 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 01:05:02.041468 | orchestrator | Friday 19 September 2025 01:04:06 +0000 (0:00:00.260) 0:00:00.260 ****** 2025-09-19 01:05:02.041479 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:05:02.041491 | orchestrator | ok: [testbed-node-1] 2025-09-19 01:05:02.041502 | orchestrator | ok: [testbed-node-2] 2025-09-19 01:05:02.041512 | orchestrator | 2025-09-19 01:05:02.041523 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 01:05:02.041534 | orchestrator | Friday 19 September 2025 01:04:06 +0000 (0:00:00.306) 0:00:00.566 ****** 2025-09-19 01:05:02.041545 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-09-19 01:05:02.041557 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-09-19 01:05:02.041568 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-09-19 01:05:02.041578 | orchestrator | 2025-09-19 01:05:02.041589 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-09-19 01:05:02.041600 | orchestrator | 2025-09-19 01:05:02.041611 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-19 01:05:02.041621 | orchestrator | Friday 19 September 2025 01:04:07 +0000 (0:00:00.413) 0:00:00.979 ****** 2025-09-19 01:05:02.041633 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 01:05:02.041645 | orchestrator | 2025-09-19 01:05:02.041672 | 
orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-09-19 01:05:02.041684 | orchestrator | Friday 19 September 2025 01:04:07 +0000 (0:00:00.535) 0:00:01.515 ****** 2025-09-19 01:05:02.041695 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-09-19 01:05:02.041705 | orchestrator | 2025-09-19 01:05:02.041716 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-09-19 01:05:02.041757 | orchestrator | Friday 19 September 2025 01:04:10 +0000 (0:00:03.202) 0:00:04.717 ****** 2025-09-19 01:05:02.041776 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-09-19 01:05:02.041795 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-09-19 01:05:02.041812 | orchestrator | 2025-09-19 01:05:02.041830 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-09-19 01:05:02.041847 | orchestrator | Friday 19 September 2025 01:04:16 +0000 (0:00:06.148) 0:00:10.865 ****** 2025-09-19 01:05:02.041865 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-19 01:05:02.041883 | orchestrator | 2025-09-19 01:05:02.041900 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-09-19 01:05:02.041917 | orchestrator | Friday 19 September 2025 01:04:20 +0000 (0:00:03.374) 0:00:14.240 ****** 2025-09-19 01:05:02.041934 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 01:05:02.041952 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-19 01:05:02.041968 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-19 01:05:02.041986 | orchestrator | 2025-09-19 01:05:02.042004 | orchestrator | TASK [service-ks-register : octavia | Creating roles] 
************************** 2025-09-19 01:05:02.042103 | orchestrator | Friday 19 September 2025 01:04:27 +0000 (0:00:07.578) 0:00:21.819 ****** 2025-09-19 01:05:02.042125 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 01:05:02.042154 | orchestrator | 2025-09-19 01:05:02.042173 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-09-19 01:05:02.042192 | orchestrator | Friday 19 September 2025 01:04:31 +0000 (0:00:03.064) 0:00:24.884 ****** 2025-09-19 01:05:02.042211 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-19 01:05:02.042231 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-19 01:05:02.042251 | orchestrator | 2025-09-19 01:05:02.042271 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-09-19 01:05:02.042330 | orchestrator | Friday 19 September 2025 01:04:38 +0000 (0:00:07.290) 0:00:32.174 ****** 2025-09-19 01:05:02.042349 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-09-19 01:05:02.042369 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-09-19 01:05:02.042387 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-09-19 01:05:02.042406 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-09-19 01:05:02.042425 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-09-19 01:05:02.042445 | orchestrator | 2025-09-19 01:05:02.042465 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-19 01:05:02.042483 | orchestrator | Friday 19 September 2025 01:04:54 +0000 (0:00:16.506) 0:00:48.681 ****** 2025-09-19 01:05:02.042501 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 01:05:02.042518 | 
orchestrator | 2025-09-19 01:05:02.042536 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-09-19 01:05:02.042555 | orchestrator | Friday 19 September 2025 01:04:55 +0000 (0:00:00.554) 0:00:49.235 ****** 2025-09-19 01:05:02.042573 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: keystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found 2025-09-19 01:05:02.042649 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible-tmp-1758243896.9237163-6696-164435159952746/AnsiballZ_compute_flavor.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1758243896.9237163-6696-164435159952746/AnsiballZ_compute_flavor.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1758243896.9237163-6696-164435159952746/AnsiballZ_compute_flavor.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_nova_flavor_payload_4dklz8__/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 367, in \n File \"/tmp/ansible_os_nova_flavor_payload_4dklz8__/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 363, in main\n File \"/tmp/ansible_os_nova_flavor_payload_4dklz8__/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, 
in __call__\n File \"/tmp/ansible_os_nova_flavor_payload_4dklz8__/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 220, in run\n File \"/opt/ansible/lib/python3.11/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/identity/base.py\", line 272, in get_endpoint_data\n endpoint_data = service_catalog.endpoint_data_for(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/access/service_catalog.py\", line 459, in endpoint_data_for\n raise exceptions.EndpointNotFound(msg)\nkeystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2025-09-19 01:05:02.042694 | orchestrator | 2025-09-19 01:05:02.042717 | orchestrator | PLAY RECAP 
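The EndpointNotFound failure above reduces to an empty service-catalog lookup: keystoneauth filters the catalog returned by Keystone by service type, interface, and region, and raises when nothing matches — here, no internal compute (Nova) endpoint was registered in RegionOne yet when the octavia role tried to create the amphora flavor. A minimal sketch of that filtering (a simplified stand-in for keystoneauth's `endpoint_data_for`, not the real implementation; the catalog structure is illustrative):

```python
# Simplified model of a Keystone service-catalog endpoint lookup.
# NOT the real keystoneauth code -- just the filtering logic that
# produces the "internal endpoint for compute service in RegionOne
# region not found" message seen in the traceback above.

class EndpointNotFound(Exception):
    pass

def endpoint_for(catalog, service_type, interface, region):
    """Return the first matching endpoint URL or raise EndpointNotFound."""
    for service in catalog:
        if service["type"] != service_type:
            continue
        for ep in service["endpoints"]:
            if ep["interface"] == interface and ep["region"] == region:
                return ep["url"]
    raise EndpointNotFound(
        f"{interface} endpoint for {service_type} service "
        f"in {region} region not found"
    )

# A catalog that, like the one in this run, already has the freshly
# registered load-balancer endpoints but no compute endpoints:
catalog = [
    {"type": "load-balancer", "endpoints": [
        {"interface": "internal", "region": "RegionOne",
         "url": "https://api-int.testbed.osism.xyz:9876"},
        {"interface": "public", "region": "RegionOne",
         "url": "https://api.testbed.osism.xyz:9876"},
    ]},
]

try:
    endpoint_for(catalog, "compute", "internal", "RegionOne")
except EndpointNotFound as exc:
    print(exc)  # internal endpoint for compute service in RegionOne region not found
```

The lookup only succeeds once Nova's endpoints exist in the catalog, which is why the task fails while Nova deployment is still in flight elsewhere in the run.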
********************************************************************* 2025-09-19 01:05:02.042737 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-09-19 01:05:02.042760 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 01:05:02.042781 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 01:05:02.042801 | orchestrator | 2025-09-19 01:05:02.042822 | orchestrator | 2025-09-19 01:05:02.042841 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 01:05:02.042861 | orchestrator | Friday 19 September 2025 01:04:59 +0000 (0:00:03.686) 0:00:52.922 ****** 2025-09-19 01:05:02.042891 | orchestrator | =============================================================================== 2025-09-19 01:05:02.042922 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.51s 2025-09-19 01:05:02.042942 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.58s 2025-09-19 01:05:02.042961 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.29s 2025-09-19 01:05:02.042982 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.15s 2025-09-19 01:05:02.043000 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.69s 2025-09-19 01:05:02.043018 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.37s 2025-09-19 01:05:02.043037 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.20s 2025-09-19 01:05:02.043058 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.06s 2025-09-19 01:05:02.043079 | orchestrator | octavia : include_tasks 
------------------------------------------------- 0.55s 2025-09-19 01:05:02.043098 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.54s 2025-09-19 01:05:02.043118 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s 2025-09-19 01:05:02.043138 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-09-19 01:05:02.043159 | orchestrator | 2025-09-19 01:05:02 | INFO  | Task e7ff293b-b61f-45e7-916e-90f18e292931 is in state SUCCESS 2025-09-19 01:05:02.043618 | orchestrator | 2025-09-19 01:05:02 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:05:02.046319 | orchestrator | 2025-09-19 01:05:02 | INFO  | Task aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state STARTED 2025-09-19 01:05:02.051163 | orchestrator | 2025-09-19 01:05:02 | INFO  | Task 0190091f-407c-4c75-aa47-febcb9d85658 is in state STARTED 2025-09-19 01:05:02.051223 | orchestrator | 2025-09-19 01:05:02 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:05:05.090841 | orchestrator | 2025-09-19 01:05:05 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:05:05.092814 | orchestrator | 2025-09-19 01:05:05 | INFO  | Task aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state STARTED 2025-09-19 01:05:05.094005 | orchestrator | 2025-09-19 01:05:05 | INFO  | Task 0190091f-407c-4c75-aa47-febcb9d85658 is in state STARTED 2025-09-19 01:05:05.094156 | orchestrator | 2025-09-19 01:05:05 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:05:08.124655 | orchestrator | 2025-09-19 01:05:08 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:05:08.125571 | orchestrator | 2025-09-19 01:05:08 | INFO  | Task aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state STARTED 2025-09-19 01:05:08.127000 | orchestrator | 2025-09-19 01:05:08 | INFO  | Task 0190091f-407c-4c75-aa47-febcb9d85658 is in 
state STARTED 2025-09-19 01:05:08.127148 | orchestrator | 2025-09-19 01:05:08 | INFO  | Wait 1 second(s) until the next check [... identical status polls repeat every ~3 seconds from 01:05:11 to 01:06:39; tasks c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d, aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 and 0190091f-407c-4c75-aa47-febcb9d85658 all remain in state STARTED ...] 2025-09-19 01:06:42.571703 | orchestrator | 2025-09-19 01:06:42 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:06:42.572817 | orchestrator | 2025-09-19 01:06:42 | INFO  | Task
aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state STARTED 2025-09-19 01:06:42.575048 | orchestrator | 2025-09-19 01:06:42 | INFO  | Task 0190091f-407c-4c75-aa47-febcb9d85658 is in state STARTED 2025-09-19 01:06:42.575063 | orchestrator | 2025-09-19 01:06:42 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:06:45.618894 | orchestrator | 2025-09-19 01:06:45 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:06:45.620279 | orchestrator | 2025-09-19 01:06:45 | INFO  | Task aeb05a56-aaaa-4f1f-bf07-7d6dd2b86a38 is in state SUCCESS 2025-09-19 01:06:45.621117 | orchestrator | 2025-09-19 01:06:45 | INFO  | Task 0190091f-407c-4c75-aa47-febcb9d85658 is in state STARTED 2025-09-19 01:06:45.621369 | orchestrator | 2025-09-19 01:06:45 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:06:48.664545 | orchestrator | 2025-09-19 01:06:48 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:06:48.666160 | orchestrator | 2025-09-19 01:06:48 | INFO  | Task 0190091f-407c-4c75-aa47-febcb9d85658 is in state STARTED 2025-09-19 01:06:48.666201 | orchestrator | 2025-09-19 01:06:48 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:06:51.709829 | orchestrator | 2025-09-19 01:06:51 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:06:51.711436 | orchestrator | 2025-09-19 01:06:51 | INFO  | Task 0190091f-407c-4c75-aa47-febcb9d85658 is in state STARTED 2025-09-19 01:06:51.711977 | orchestrator | 2025-09-19 01:06:51 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:06:54.760030 | orchestrator | 2025-09-19 01:06:54 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:06:54.760116 | orchestrator | 2025-09-19 01:06:54 | INFO  | Task 0190091f-407c-4c75-aa47-febcb9d85658 is in state STARTED 2025-09-19 01:06:54.760128 | orchestrator | 2025-09-19 01:06:54 | INFO  | Wait 1 second(s) until the next check 2025-09-19 
01:06:57.798401 | orchestrator | 2025-09-19 01:06:57 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:06:57.798500 | orchestrator | 2025-09-19 01:06:57 | INFO  | Task 0190091f-407c-4c75-aa47-febcb9d85658 is in state STARTED 2025-09-19 01:06:57.798515 | orchestrator | 2025-09-19 01:06:57 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:07:00.846574 | orchestrator | 2025-09-19 01:07:00 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:07:00.846861 | orchestrator | 2025-09-19 01:07:00 | INFO  | Task 0190091f-407c-4c75-aa47-febcb9d85658 is in state STARTED 2025-09-19 01:07:00.846956 | orchestrator | 2025-09-19 01:07:00 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:07:03.890858 | orchestrator | 2025-09-19 01:07:03 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:07:03.894538 | orchestrator | 2025-09-19 01:07:03 | INFO  | Task 0190091f-407c-4c75-aa47-febcb9d85658 is in state SUCCESS 2025-09-19 01:07:03.896910 | orchestrator | 2025-09-19 01:07:03.896985 | orchestrator | 2025-09-19 01:07:03.897005 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 01:07:03.897063 | orchestrator | 2025-09-19 01:07:03.897157 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 01:07:03.897176 | orchestrator | Friday 19 September 2025 01:02:42 +0000 (0:00:00.400) 0:00:00.400 ****** 2025-09-19 01:07:03.897195 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:07:03.897210 | orchestrator | ok: [testbed-node-1] 2025-09-19 01:07:03.897221 | orchestrator | ok: [testbed-node-2] 2025-09-19 01:07:03.897327 | orchestrator | 2025-09-19 01:07:03.897341 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 01:07:03.897352 | orchestrator | Friday 19 September 2025 01:02:43 +0000 (0:00:00.762) 
0:00:01.163 ****** 2025-09-19 01:07:03.897363 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-09-19 01:07:03.897374 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-09-19 01:07:03.897385 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-09-19 01:07:03.897395 | orchestrator | 2025-09-19 01:07:03.897406 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-09-19 01:07:03.897417 | orchestrator | 2025-09-19 01:07:03.897428 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-09-19 01:07:03.897440 | orchestrator | Friday 19 September 2025 01:02:44 +0000 (0:00:01.335) 0:00:02.498 ****** 2025-09-19 01:07:03.897450 | orchestrator | 2025-09-19 01:07:03.897474 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-09-19 01:07:03.897485 | orchestrator | 2025-09-19 01:07:03.897519 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-09-19 01:07:03.897532 | orchestrator | 2025-09-19 01:07:03.897544 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-09-19 01:07:03.897557 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:07:03.897570 | orchestrator | ok: [testbed-node-2] 2025-09-19 01:07:03.897582 | orchestrator | ok: [testbed-node-1] 2025-09-19 01:07:03.897595 | orchestrator | 2025-09-19 01:07:03.897608 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 01:07:03.897623 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 01:07:03.897673 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 01:07:03.897687 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2025-09-19 01:07:03.897699 | orchestrator | 2025-09-19 01:07:03.897712 | orchestrator | 2025-09-19 01:07:03.897725 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 01:07:03.897738 | orchestrator | Friday 19 September 2025 01:06:44 +0000 (0:04:00.125) 0:04:02.623 ****** 2025-09-19 01:07:03.897750 | orchestrator | =============================================================================== 2025-09-19 01:07:03.897763 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 240.13s 2025-09-19 01:07:03.897776 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.34s 2025-09-19 01:07:03.897788 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.76s 2025-09-19 01:07:03.897801 | orchestrator | 2025-09-19 01:07:03.897814 | orchestrator | 2025-09-19 01:07:03.897826 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 01:07:03.897839 | orchestrator | 2025-09-19 01:07:03.897853 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 01:07:03.897864 | orchestrator | Friday 19 September 2025 01:04:42 +0000 (0:00:00.259) 0:00:00.259 ****** 2025-09-19 01:07:03.897902 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:07:03.897913 | orchestrator | ok: [testbed-node-1] 2025-09-19 01:07:03.897925 | orchestrator | ok: [testbed-node-2] 2025-09-19 01:07:03.897936 | orchestrator | 2025-09-19 01:07:03.897960 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 01:07:03.897972 | orchestrator | Friday 19 September 2025 01:04:42 +0000 (0:00:00.269) 0:00:00.528 ****** 2025-09-19 01:07:03.897983 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-09-19 01:07:03.897994 | orchestrator | ok: [testbed-node-1] => 
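The four-minute "Waiting for Nova public port to be UP" task in the recap above (240.13 s) is, conceptually, a plain TCP-connect poll with a deadline, equivalent to what Ansible's `wait_for` module does. A minimal sketch of the pattern, assuming placeholder host/port values rather than the testbed's real API endpoint:

```python
# Sketch of a "wait for port UP" check: poll a TCP connect until it
# succeeds or the deadline (240 s in the run above) expires.  This is
# an illustration of the pattern, not the testbed's actual check.
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 240.0,
                  interval: float = 1.0) -> bool:
    """Return True once a TCP connect to host:port succeeds, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            # Connection refused / timed out -- back off and retry.
            time.sleep(interval)
    return False
```

In the log above the check eventually returns `ok` on all three nodes once HAProxy starts answering on the Nova API port; a closed or unreachable port would instead burn the full timeout and fail the play.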
(item=enable_grafana_True) 2025-09-19 01:07:03.898004 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-09-19 01:07:03.898067 | orchestrator | 2025-09-19 01:07:03.898082 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-09-19 01:07:03.898104 | orchestrator | 2025-09-19 01:07:03.898115 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-19 01:07:03.898126 | orchestrator | Friday 19 September 2025 01:04:42 +0000 (0:00:00.356) 0:00:00.885 ****** 2025-09-19 01:07:03.898137 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 01:07:03.898148 | orchestrator | 2025-09-19 01:07:03.898159 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-09-19 01:07:03.898170 | orchestrator | Friday 19 September 2025 01:04:43 +0000 (0:00:00.456) 0:00:01.341 ****** 2025-09-19 01:07:03.898200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 01:07:03.898216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 01:07:03.898228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 01:07:03.898239 | orchestrator | 2025-09-19 01:07:03.898250 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-09-19 01:07:03.898262 | orchestrator | Friday 19 September 2025 01:04:43 +0000 (0:00:00.781) 0:00:02.123 ****** 2025-09-19 01:07:03.898273 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-09-19 01:07:03.898284 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-09-19 01:07:03.898295 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 01:07:03.898306 | orchestrator | 2025-09-19 01:07:03.898317 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-19 01:07:03.898328 | orchestrator | Friday 19 
September 2025 01:04:44 +0000 (0:00:00.733) 0:00:02.856 ****** 2025-09-19 01:07:03.898350 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 01:07:03.898362 | orchestrator | 2025-09-19 01:07:03.898373 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-09-19 01:07:03.898384 | orchestrator | Friday 19 September 2025 01:04:45 +0000 (0:00:00.596) 0:00:03.453 ****** 2025-09-19 01:07:03.898402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 01:07:03.898423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 01:07:03.898444 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 01:07:03.898457 | orchestrator | 2025-09-19 01:07:03.898468 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-09-19 01:07:03.898478 | orchestrator | Friday 19 September 2025 01:04:46 +0000 (0:00:01.495) 0:00:04.948 ****** 2025-09-19 01:07:03.898490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 01:07:03.898501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 01:07:03.898513 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:07:03.898524 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:07:03.898535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 01:07:03.898553 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:07:03.898564 | orchestrator | 2025-09-19 01:07:03.898579 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-09-19 01:07:03.898590 | orchestrator | Friday 19 September 2025 01:04:47 +0000 (0:00:00.369) 0:00:05.318 ****** 2025-09-19 01:07:03.898602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 01:07:03.898613 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:07:03.898651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 01:07:03.898664 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:07:03.898675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 01:07:03.898686 | orchestrator | 
skipping: [testbed-node-2] 2025-09-19 01:07:03.898697 | orchestrator | 2025-09-19 01:07:03.898708 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-09-19 01:07:03.898719 | orchestrator | Friday 19 September 2025 01:04:47 +0000 (0:00:00.763) 0:00:06.081 ****** 2025-09-19 01:07:03.898730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 01:07:03.898751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 01:07:03.898767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 01:07:03.898779 | orchestrator | 2025-09-19 01:07:03.898790 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-09-19 01:07:03.898800 | orchestrator | Friday 19 September 2025 01:04:49 +0000 (0:00:01.252) 0:00:07.333 ****** 2025-09-19 01:07:03.898812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 01:07:03.898832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 01:07:03.898844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 01:07:03.898855 | orchestrator | 2025-09-19 01:07:03.898872 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-09-19 01:07:03.898887 | orchestrator | Friday 19 September 2025 01:04:50 +0000 (0:00:01.340) 0:00:08.674 ****** 2025-09-19 01:07:03.898905 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:07:03.898916 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:07:03.898926 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:07:03.898937 | orchestrator | 2025-09-19 01:07:03.898947 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-09-19 01:07:03.898958 | orchestrator | Friday 19 September 2025 01:04:51 +0000 (0:00:00.465) 0:00:09.139 ****** 2025-09-19 01:07:03.898969 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-19 01:07:03.898980 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-19 01:07:03.898991 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-19 01:07:03.899002 | orchestrator | 2025-09-19 01:07:03.899012 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-09-19 01:07:03.899023 | orchestrator | Friday 19 September 2025 01:04:52 +0000 (0:00:01.288) 0:00:10.428 ****** 2025-09-19 01:07:03.899034 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-19 01:07:03.899045 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-19 01:07:03.899056 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-19 01:07:03.899067 | orchestrator | 2025-09-19 01:07:03.899078 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-09-19 01:07:03.899088 | orchestrator | Friday 19 September 2025 01:04:53 +0000 (0:00:01.324) 0:00:11.752 ****** 2025-09-19 01:07:03.899099 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 01:07:03.899110 | orchestrator | 2025-09-19 01:07:03.899126 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-09-19 01:07:03.899137 | orchestrator | Friday 19 September 2025 01:04:54 +0000 (0:00:00.778) 0:00:12.531 ****** 2025-09-19 01:07:03.899147 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-09-19 01:07:03.899158 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-09-19 01:07:03.899169 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:07:03.899180 | orchestrator | ok: [testbed-node-1] 2025-09-19 01:07:03.899190 | orchestrator | ok: [testbed-node-2] 2025-09-19 01:07:03.899201 | orchestrator | 2025-09-19 
01:07:03.899212 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-09-19 01:07:03.899223 | orchestrator | Friday 19 September 2025 01:04:55 +0000 (0:00:00.737) 0:00:13.268 ****** 2025-09-19 01:07:03.899233 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:07:03.899244 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:07:03.899255 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:07:03.899265 | orchestrator | 2025-09-19 01:07:03.899276 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-09-19 01:07:03.899287 | orchestrator | Friday 19 September 2025 01:04:55 +0000 (0:00:00.544) 0:00:13.812 ****** 2025-09-19 01:07:03.899315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1073776, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240886.9943113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1073776, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240886.9943113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1073776, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240886.9943113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1073840, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.009382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1073840, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.009382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1073840, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.009382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1073789, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240886.997968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1073789, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240886.997968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1073789, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240886.997968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1073843, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0125484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1073843, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0125484, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1073843, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0125484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1073806, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0025551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1073806, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 
1752315970.0, 'ctime': 1758240887.0025551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1073806, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0025551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1073826, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0077178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1073826, 'dev': 114, 
'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0077178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1073826, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0077178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1073772, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240886.9924414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1073772, 'dev': 114, 'nlink': 1, 
'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240886.9924414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1073772, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240886.9924414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1073782, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240886.9953609, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1073782, 'dev': 114, 'nlink': 1, 
'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240886.9953609, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1073782, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240886.9953609, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1073794, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240886.9995968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 
1073794, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240886.9995968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1073794, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240886.9995968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1073814, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0049093, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 19609, 'inode': 1073814, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0049093, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1073814, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0049093, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1073837, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0090716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 12997, 'inode': 1073837, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0090716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1073837, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0090716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1073786, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240886.9969764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1073786, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240886.9969764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1073786, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240886.9969764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1073824, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.005903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1073824, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.005903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.899988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1073824, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.005903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1073810, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.002902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1073810, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.002902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1073810, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.002902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1073804, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0017571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1073804, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0017571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1073804, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0017571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1073802, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.000878, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1073802, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.000878, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1073802, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.000878, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1073822, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.005903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900144 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1073822, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.005903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1073822, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.005903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1073797, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.000682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900194 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1073797, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.000682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1073797, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.000682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1073832, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0084112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900229 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1073832, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0084112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1073832, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0084112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1074080, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1021035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1074080, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1021035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1074080, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.1021035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1073872, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0247316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1073872, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0247316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1073872, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0247316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1073858, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.016597, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1073858, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.016597, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1073858, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.016597, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1073905, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0329943, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1073905, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0329943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1073905, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0329943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1073850, 'dev': 114, 'nlink': 
1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0135953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1073850, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0135953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1073850, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0135953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1073964, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0629904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1073964, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0629904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1073964, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0629904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1073907, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0516357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1073907, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0516357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1073907, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0516357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900569 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1074006, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0636756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1074006, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0636756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1074006, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0636756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1074066, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0989382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1074066, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0989382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1074066, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0989382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1073962, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0532894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1073962, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0532894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1073962, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0532894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1073900, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0311189, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1073900, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0311189, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1073900, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0311189, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1073865, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0204928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1073865, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0204928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1073865, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 
'mtime': 1752315970.0, 'ctime': 1758240887.0204928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1073890, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0297315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1073890, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0297315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1073890, 
'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0297315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1073863, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0177317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1073863, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0177317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1073863, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0177317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1073902, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0319529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.900939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1073902, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0319529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.901025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1073902, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0319529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.901039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1074014, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.096776, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.901051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1074014, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.096776, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.901074 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1074014, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.096776, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.901091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1074012, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0667315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.901102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1074012, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0667315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.901114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1074012, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0667315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.901131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1073853, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0141714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.901143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1073853, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0141714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.901162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1073853, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0141714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.901173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1073856, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0149117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.901189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1073856, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0149117, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.901201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1073856, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0149117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.901218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1073957, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0530617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.901230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 
'inode': 1073957, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0530617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.901248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1073957, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0530617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.901259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1074008, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0657876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.901275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1074008, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0657876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.901286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1074008, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758240887.0657876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 01:07:03.901298 | orchestrator | 2025-09-19 01:07:03.901309 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-09-19 01:07:03.901320 | orchestrator | Friday 19 September 2025 01:05:33 +0000 (0:00:37.670) 0:00:51.483 ****** 2025-09-19 01:07:03.901336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 01:07:03.901348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 01:07:03.901369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 01:07:03.901380 | orchestrator | 2025-09-19 01:07:03.901392 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-09-19 01:07:03.901402 | orchestrator | Friday 19 September 2025 01:05:34 +0000 (0:00:00.940) 0:00:52.423 ****** 2025-09-19 01:07:03.901414 | orchestrator | changed: [testbed-node-0] 2025-09-19 01:07:03.901424 | orchestrator | 2025-09-19 01:07:03.901435 | orchestrator | 
TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-09-19 01:07:03.901446 | orchestrator | Friday 19 September 2025 01:05:36 +0000 (0:00:02.333) 0:00:54.756 ****** 2025-09-19 01:07:03.901457 | orchestrator | changed: [testbed-node-0] 2025-09-19 01:07:03.901468 | orchestrator | 2025-09-19 01:07:03.901478 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-19 01:07:03.901489 | orchestrator | Friday 19 September 2025 01:05:38 +0000 (0:00:02.230) 0:00:56.987 ****** 2025-09-19 01:07:03.901500 | orchestrator | 2025-09-19 01:07:03.901511 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-19 01:07:03.901526 | orchestrator | Friday 19 September 2025 01:05:39 +0000 (0:00:00.255) 0:00:57.242 ****** 2025-09-19 01:07:03.901537 | orchestrator | 2025-09-19 01:07:03.901548 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-19 01:07:03.901558 | orchestrator | Friday 19 September 2025 01:05:39 +0000 (0:00:00.063) 0:00:57.306 ****** 2025-09-19 01:07:03.901569 | orchestrator | 2025-09-19 01:07:03.901580 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-09-19 01:07:03.901590 | orchestrator | Friday 19 September 2025 01:05:39 +0000 (0:00:00.068) 0:00:57.374 ****** 2025-09-19 01:07:03.901601 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:07:03.901612 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:07:03.901623 | orchestrator | changed: [testbed-node-0] 2025-09-19 01:07:03.901695 | orchestrator | 2025-09-19 01:07:03.901707 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-09-19 01:07:03.901718 | orchestrator | Friday 19 September 2025 01:05:41 +0000 (0:00:01.892) 0:00:59.266 ****** 2025-09-19 01:07:03.901729 | orchestrator | skipping: [testbed-node-1] 2025-09-19 
01:07:03.901740 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:07:03.901751 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-09-19 01:07:03.901762 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-09-19 01:07:03.901773 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2025-09-19 01:07:03.901785 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:07:03.901796 | orchestrator | 2025-09-19 01:07:03.901807 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-09-19 01:07:03.901825 | orchestrator | Friday 19 September 2025 01:06:19 +0000 (0:00:38.669) 0:01:37.936 ****** 2025-09-19 01:07:03.901836 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:07:03.901847 | orchestrator | changed: [testbed-node-1] 2025-09-19 01:07:03.901858 | orchestrator | changed: [testbed-node-2] 2025-09-19 01:07:03.901869 | orchestrator | 2025-09-19 01:07:03.901880 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-09-19 01:07:03.901897 | orchestrator | Friday 19 September 2025 01:06:56 +0000 (0:00:37.189) 0:02:15.125 ****** 2025-09-19 01:07:03.901909 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:07:03.901920 | orchestrator | 2025-09-19 01:07:03.901931 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-09-19 01:07:03.901942 | orchestrator | Friday 19 September 2025 01:06:59 +0000 (0:00:02.197) 0:02:17.323 ****** 2025-09-19 01:07:03.901953 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:07:03.901964 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:07:03.901975 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:07:03.901985 | orchestrator | 2025-09-19 01:07:03.901996 | orchestrator | TASK 
[grafana : Enable grafana datasources] ************************************ 2025-09-19 01:07:03.902008 | orchestrator | Friday 19 September 2025 01:06:59 +0000 (0:00:00.498) 0:02:17.822 ****** 2025-09-19 01:07:03.902072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-09-19 01:07:03.902086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-09-19 01:07:03.902098 | orchestrator | 2025-09-19 01:07:03.902109 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-09-19 01:07:03.902120 | orchestrator | Friday 19 September 2025 01:07:02 +0000 (0:00:02.352) 0:02:20.174 ****** 2025-09-19 01:07:03.902131 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:07:03.902141 | orchestrator | 2025-09-19 01:07:03.902152 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 01:07:03.902164 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-19 01:07:03.902177 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-19 01:07:03.902188 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-19 01:07:03.902199 | orchestrator | 2025-09-19 01:07:03.902210 | orchestrator | 2025-09-19 01:07:03.902219 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-19 01:07:03.902229 | orchestrator | Friday 19 September 2025 01:07:02 +0000 (0:00:00.294) 0:02:20.469 ****** 2025-09-19 01:07:03.902238 | orchestrator | =============================================================================== 2025-09-19 01:07:03.902248 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.67s 2025-09-19 01:07:03.902258 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.67s 2025-09-19 01:07:03.902267 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 37.19s 2025-09-19 01:07:03.902277 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.35s 2025-09-19 01:07:03.902286 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.33s 2025-09-19 01:07:03.902302 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.23s 2025-09-19 01:07:03.902318 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.20s 2025-09-19 01:07:03.902328 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.89s 2025-09-19 01:07:03.902338 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.50s 2025-09-19 01:07:03.902348 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.34s 2025-09-19 01:07:03.902357 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.32s 2025-09-19 01:07:03.902367 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.29s 2025-09-19 01:07:03.902377 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.25s 2025-09-19 01:07:03.902386 | orchestrator | grafana : Check grafana 
containers -------------------------------------- 0.94s 2025-09-19 01:07:03.902396 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.78s 2025-09-19 01:07:03.902406 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.78s 2025-09-19 01:07:03.902415 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.76s 2025-09-19 01:07:03.902425 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.74s 2025-09-19 01:07:03.902435 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.73s 2025-09-19 01:07:03.902444 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.60s 2025-09-19 01:07:03.902454 | orchestrator | 2025-09-19 01:07:03 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:07:06.926334 | orchestrator | 2025-09-19 01:07:06 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state STARTED 2025-09-19 01:07:06.926400 | orchestrator | 2025-09-19 01:07:06 | INFO  | Wait 1 second(s) until the next check 2025-09-19 01:10:55.256406 | orchestrator | 2025-09-19 01:10:55.256472 | orchestrator | 2025-09-19 01:10:55 | INFO  | Task c6b6f9fd-b5dc-4d2f-ad40-92bdac6d700d is in state SUCCESS 2025-09-19 01:10:55.256961 | orchestrator | 2025-09-19 01:10:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 01:10:55.258184 | 
orchestrator |
2025-09-19 01:10:55.258210 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 01:10:55.258215 | orchestrator |
2025-09-19 01:10:55.258219 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-09-19 01:10:55.258224 | orchestrator | Friday 19 September 2025 01:02:40 +0000 (0:00:00.589) 0:00:00.589 ******
2025-09-19 01:10:55.258228 | orchestrator | changed: [testbed-manager]
2025-09-19 01:10:55.258234 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:10:55.258238 | orchestrator | changed: [testbed-node-1]
2025-09-19 01:10:55.258241 | orchestrator | changed: [testbed-node-2]
2025-09-19 01:10:55.258245 | orchestrator | changed: [testbed-node-3]
2025-09-19 01:10:55.258249 | orchestrator | changed: [testbed-node-4]
2025-09-19 01:10:55.258253 | orchestrator | changed: [testbed-node-5]
2025-09-19 01:10:55.258257 | orchestrator |
2025-09-19 01:10:55.258260 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 01:10:55.258264 | orchestrator | Friday 19 September 2025 01:02:40 +0000 (0:00:00.798) 0:00:01.387 ******
2025-09-19 01:10:55.258268 | orchestrator | changed: [testbed-manager]
2025-09-19 01:10:55.258272 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:10:55.258276 | orchestrator | changed: [testbed-node-1]
2025-09-19 01:10:55.258286 | orchestrator | changed: [testbed-node-2]
2025-09-19 01:10:55.258290 | orchestrator | changed: [testbed-node-3]
2025-09-19 01:10:55.258294 | orchestrator | changed: [testbed-node-4]
2025-09-19 01:10:55.258298 | orchestrator | changed: [testbed-node-5]
2025-09-19 01:10:55.258341 | orchestrator |
2025-09-19 01:10:55.258346 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 01:10:55.258398 | orchestrator | Friday 19 September 2025 01:02:41 +0000 (0:00:00.595) 0:00:01.982 ******
2025-09-19 01:10:55.258419 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-09-19 01:10:55.258449 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-09-19 01:10:55.258453 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-09-19 01:10:55.258473 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-09-19 01:10:55.258477 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-09-19 01:10:55.258481 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-09-19 01:10:55.258485 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-09-19 01:10:55.258488 | orchestrator |
2025-09-19 01:10:55.258493 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-09-19 01:10:55.258497 | orchestrator |
2025-09-19 01:10:55.258501 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-09-19 01:10:55.258505 | orchestrator | Friday 19 September 2025 01:02:42 +0000 (0:00:01.121) 0:00:03.104 ******
2025-09-19 01:10:55.258508 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 01:10:55.258512 | orchestrator |
2025-09-19 01:10:55.258516 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-09-19 01:10:55.258664 | orchestrator | Friday 19 September 2025 01:02:45 +0000 (0:00:02.347) 0:00:05.452 ******
2025-09-19 01:10:55.258670 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-09-19 01:10:55.258674 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-09-19 01:10:55.258678 | orchestrator |
2025-09-19 01:10:55.258682 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-09-19 01:10:55.258686 | orchestrator | Friday 19 September 2025 01:02:49 +0000 (0:00:04.475) 0:00:09.927 ******
2025-09-19 01:10:55.258690 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-19 01:10:55.258694 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-19 01:10:55.258697 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:10:55.258701 | orchestrator |
2025-09-19 01:10:55.258705 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-09-19 01:10:55.258709 | orchestrator | Friday 19 September 2025 01:02:53 +0000 (0:00:04.323) 0:00:14.251 ******
2025-09-19 01:10:55.258712 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:10:55.258721 | orchestrator |
2025-09-19 01:10:55.258725 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-09-19 01:10:55.258739 | orchestrator | Friday 19 September 2025 01:02:54 +0000 (0:00:00.705) 0:00:14.956 ******
2025-09-19 01:10:55.258743 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:10:55.258747 | orchestrator |
2025-09-19 01:10:55.258750 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-09-19 01:10:55.258754 | orchestrator | Friday 19 September 2025 01:02:56 +0000 (0:00:01.489) 0:00:16.446 ******
2025-09-19 01:10:55.258758 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:10:55.258762 | orchestrator |
2025-09-19 01:10:55.258765 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-19 01:10:55.258769 | orchestrator | Friday 19 September 2025 01:02:58 +0000 (0:00:00.264) 0:00:18.753 ******
2025-09-19 01:10:55.258773 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:10:55.258791 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:10:55.258796 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:10:55.258800 | orchestrator |
2025-09-19 01:10:55.258811 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-09-19 01:10:55.258815 | orchestrator | Friday 19 September 2025 01:02:58 +0000 (0:00:00.264) 0:00:19.017 ******
2025-09-19 01:10:55.258819 | orchestrator | ok: [testbed-node-0]
2025-09-19 01:10:55.258823 | orchestrator |
2025-09-19 01:10:55.258828 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-09-19 01:10:55.258832 | orchestrator | Friday 19 September 2025 01:03:33 +0000 (0:00:35.279) 0:00:54.297 ******
2025-09-19 01:10:55.258836 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:10:55.258840 | orchestrator |
2025-09-19 01:10:55.258916 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-19 01:10:55.258999 | orchestrator | Friday 19 September 2025 01:03:48 +0000 (0:00:14.860) 0:01:09.158 ******
2025-09-19 01:10:55.259010 | orchestrator | ok: [testbed-node-0]
2025-09-19 01:10:55.259014 | orchestrator |
2025-09-19 01:10:55.259018 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-19 01:10:55.259028 | orchestrator | Friday 19 September 2025 01:03:59 +0000 (0:00:10.934) 0:01:20.092 ******
2025-09-19 01:10:55.259041 | orchestrator | ok: [testbed-node-0]
2025-09-19 01:10:55.259046 | orchestrator |
2025-09-19 01:10:55.259050 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-09-19 01:10:55.259054 | orchestrator | Friday 19 September 2025 01:04:00 +0000 (0:00:01.042) 0:01:21.134 ******
2025-09-19 01:10:55.259059 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:10:55.259063 | orchestrator |
2025-09-19 01:10:55.259067 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-19 01:10:55.259071 | orchestrator | Friday 19 September 2025 01:04:01 +0000 (0:00:00.447) 0:01:21.582 ******
2025-09-19 01:10:55.259076 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 01:10:55.259080 | orchestrator |
2025-09-19 01:10:55.259084 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-09-19 01:10:55.259089 | orchestrator | Friday 19 September 2025 01:04:01 +0000 (0:00:00.515) 0:01:22.098 ******
2025-09-19 01:10:55.259093 | orchestrator | ok: [testbed-node-0]
2025-09-19 01:10:55.259194 | orchestrator |
2025-09-19 01:10:55.259357 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-09-19 01:10:55.259361 | orchestrator | Friday 19 September 2025 01:04:19 +0000 (0:00:17.381) 0:01:39.480 ******
2025-09-19 01:10:55.259365 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:10:55.259369 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:10:55.259372 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:10:55.259376 | orchestrator |
2025-09-19 01:10:55.259380 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-09-19 01:10:55.259384 | orchestrator |
2025-09-19 01:10:55.259388 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-09-19 01:10:55.259392 | orchestrator | Friday 19 September 2025 01:04:19 +0000 (0:00:00.328) 0:01:39.808 ******
2025-09-19 01:10:55.259395 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 01:10:55.259399 | orchestrator |
2025-09-19 01:10:55.259403 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-09-19 01:10:55.259407 | orchestrator | Friday 19 September 2025 01:04:19 +0000 (0:00:00.545) 0:01:40.353 ******
2025-09-19 01:10:55.259410 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:10:55.259414 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:10:55.259418 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:10:55.259421 | orchestrator |
2025-09-19 01:10:55.259425 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-09-19 01:10:55.259429 | orchestrator | Friday 19 September 2025 01:04:22 +0000 (0:00:02.360) 0:01:42.713 ******
2025-09-19 01:10:55.259432 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:10:55.259436 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:10:55.259440 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:10:55.259444 | orchestrator |
2025-09-19 01:10:55.259447 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-09-19 01:10:55.259465 | orchestrator | Friday 19 September 2025 01:04:24 +0000 (0:00:02.215) 0:01:44.929 ******
2025-09-19 01:10:55.259486 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:10:55.259490 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:10:55.259494 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:10:55.259497 | orchestrator |
2025-09-19 01:10:55.259501 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-09-19 01:10:55.259505 | orchestrator | Friday 19 September 2025 01:04:24 +0000 (0:00:00.336) 0:01:45.265 ******
2025-09-19 01:10:55.259509 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-19 01:10:55.259512 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:10:55.259521 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-19 01:10:55.259524 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:10:55.259529 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-09-19 01:10:55.259533 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-09-19 01:10:55.259536 | orchestrator |
2025-09-19 01:10:55.259540 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-09-19 01:10:55.259544 | orchestrator | Friday 19 September 2025 01:04:32 +0000 (0:00:07.132) 0:01:52.398 ******
2025-09-19 01:10:55.259553 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:10:55.259556 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:10:55.259560 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:10:55.259564 | orchestrator |
2025-09-19 01:10:55.259568 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-09-19 01:10:55.259571 | orchestrator | Friday 19 September 2025 01:04:32 +0000 (0:00:00.466) 0:01:52.864 ******
2025-09-19 01:10:55.259575 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-09-19 01:10:55.259579 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:10:55.259583 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-19 01:10:55.259586 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:10:55.259590 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-19 01:10:55.259594 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:10:55.259597 | orchestrator |
2025-09-19 01:10:55.259601 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-09-19 01:10:55.259605 | orchestrator | Friday 19 September 2025 01:04:33 +0000 (0:00:00.579) 0:01:53.612 ******
2025-09-19 01:10:55.259609 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:10:55.259612 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:10:55.259616 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:10:55.259620 | orchestrator |
2025-09-19 01:10:55.259623 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-09-19 01:10:55.259627 | orchestrator | Friday 19 September 2025 01:04:33 +0000 (0:00:01.082) 0:01:54.191 ******
2025-09-19 01:10:55.259631 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:10:55.259640 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:10:55.259661 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:10:55.259664 | orchestrator |
2025-09-19 01:10:55.259668 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-09-19 01:10:55.259672 | orchestrator | Friday 19 September 2025 01:04:34 +0000 (0:00:01.082) 0:01:55.274 ******
2025-09-19 01:10:55.259676 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:10:55.259680 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:10:55.259694 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:10:55.259698 | orchestrator |
2025-09-19 01:10:55.259702 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-09-19 01:10:55.259706 | orchestrator | Friday 19 September 2025 01:04:37 +0000 (0:00:02.140) 0:01:57.414 ******
2025-09-19 01:10:55.259710 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:10:55.259713 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:10:55.259717 | orchestrator | ok: [testbed-node-0]
2025-09-19 01:10:55.259721 | orchestrator |
2025-09-19 01:10:55.259725 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-19 01:10:55.259922 | orchestrator | Friday 19 September 2025 01:04:59 +0000 (0:00:22.319) 0:02:19.734 ******
2025-09-19 01:10:55.259927 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:10:55.259931 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:10:55.259950 | orchestrator | ok: [testbed-node-0]
2025-09-19 01:10:55.259954 | orchestrator |
2025-09-19 01:10:55.259958 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-19 01:10:55.259962 | orchestrator | Friday 19 September 2025 01:05:12 +0000 (0:00:12.843) 0:02:32.577 ******
2025-09-19 01:10:55.259966 | orchestrator | ok: [testbed-node-0]
2025-09-19 01:10:55.259969 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:10:55.260001 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:10:55.260005 | orchestrator |
2025-09-19 01:10:55.260009 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-09-19 01:10:55.260013 | orchestrator | Friday 19 September 2025 01:05:12 +0000 (0:00:00.730) 0:02:33.308 ******
2025-09-19 01:10:55.260016 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:10:55.260020 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:10:55.260024 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:10:55.260027 | orchestrator |
2025-09-19 01:10:55.260031 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-09-19 01:10:55.260035 | orchestrator | Friday 19 September 2025 01:05:25 +0000 (0:00:12.479) 0:02:45.788 ******
2025-09-19 01:10:55.260039 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:10:55.260042 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:10:55.260046 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:10:55.260050 | orchestrator |
2025-09-19 01:10:55.260053 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-09-19 01:10:55.260057 | orchestrator | Friday 19 September 2025 01:05:26 +0000 (0:00:01.434) 0:02:47.222 ******
2025-09-19 01:10:55.260061 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:10:55.260065 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:10:55.260068 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:10:55.260072 | orchestrator |
2025-09-19 01:10:55.260076 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-09-19 01:10:55.260080 | orchestrator |
2025-09-19 01:10:55.260083 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-19 01:10:55.260087 | orchestrator | Friday 19 September 2025 01:05:27 +0000 (0:00:00.311) 0:02:47.534 ******
2025-09-19 01:10:55.260091 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 01:10:55.260095 | orchestrator |
2025-09-19 01:10:55.260099 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-09-19 01:10:55.260103 | orchestrator | Friday 19 September 2025 01:05:27 +0000 (0:00:00.548) 0:02:48.082 ******
2025-09-19 01:10:55.260107 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2025-09-19 01:10:55.260111 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-09-19 01:10:55.260114 | orchestrator |
2025-09-19 01:10:55.260118 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-09-19 01:10:55.260122 | orchestrator | Friday 19 September 2025 01:05:30 +0000 (0:00:03.265) 0:02:51.347 ******
2025-09-19 01:10:55.260126 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2025-09-19 01:10:55.260131 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2025-09-19 01:10:55.260139 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-09-19 01:10:55.260143 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-09-19 01:10:55.260147 | orchestrator |
2025-09-19 01:10:55.260151 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-09-19 01:10:55.260154 | orchestrator | Friday 19 September 2025 01:05:37 +0000 (0:00:06.628) 0:02:57.976 ******
2025-09-19 01:10:55.260158 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 01:10:55.260162 | orchestrator |
2025-09-19 01:10:55.260166 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-09-19 01:10:55.260169 | orchestrator | Friday 19 September 2025 01:05:40 +0000 (0:00:03.366) 0:03:01.342 ******
2025-09-19 01:10:55.260173 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 01:10:55.260177 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2025-09-19 01:10:55.260180 | orchestrator |
2025-09-19 01:10:55.260184 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-09-19 01:10:55.260191 | orchestrator | Friday 19 September 2025 01:05:45 +0000 (0:00:04.321) 0:03:05.664 ******
2025-09-19 01:10:55.260195 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 01:10:55.260199 | orchestrator |
2025-09-19 01:10:55.260203 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-09-19 01:10:55.260207 | orchestrator | Friday 19 September 2025 01:05:48 +0000 (0:00:03.352) 0:03:09.017 ******
2025-09-19 01:10:55.260210 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-09-19 01:10:55.260214 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2025-09-19 01:10:55.260217 | orchestrator |
2025-09-19 01:10:55.260221 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-09-19 01:10:55.260230 | orchestrator | Friday 19 September 2025 01:05:56 +0000 (0:00:07.638) 0:03:16.656 ******
2025-09-19 01:10:55.260237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 01:10:55.260243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 01:10:55.260251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 01:10:55.260263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2025-09-19 01:10:55.260269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.260273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.260277 | orchestrator | 2025-09-19 01:10:55.260281 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-09-19 01:10:55.260284 | orchestrator | Friday 19 September 2025 01:05:57 +0000 (0:00:01.320) 0:03:17.977 ****** 2025-09-19 01:10:55.260288 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.260292 | orchestrator | 2025-09-19 01:10:55.260295 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-09-19 01:10:55.260299 | orchestrator | Friday 19 September 2025 01:05:57 +0000 (0:00:00.142) 0:03:18.120 ****** 2025-09-19 01:10:55.260303 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.260307 | 
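The kolla-ansible tasks above loop over a per-service dict (container name, image, volumes, healthcheck, haproxy settings) and skip entries whose conditions do not hold, which is why a single task can report a mix of changed and skipping results. A simplified, hypothetical model of that select-then-act pattern (the flattened `tls_backend` field and `select_services` helper are illustrative, not the actual kolla-ansible data layout):

```python
# Minimal model of kolla-style per-service iteration: act only on services
# that are enabled, optionally requiring a backend TLS certificate.
services = {
    "nova-api": {"enabled": True, "tls_backend": "no"},
    "nova-scheduler": {"enabled": True, "tls_backend": "no"},
    "nova-ssh": {"enabled": False, "tls_backend": "yes"},
}

def select_services(services, require_tls=False):
    """Return names of services the task would act on; others are 'skipped'."""
    selected = []
    for name, svc in services.items():
        if not svc.get("enabled"):
            continue  # mirrors Ansible's "skipping:" for disabled services
        if require_tls and svc.get("tls_backend") != "yes":
            continue  # the TLS-copy task skips services without a TLS backend
        selected.append(name)
    return selected

print(select_services(services))                    # ['nova-api', 'nova-scheduler']
print(select_services(services, require_tls=True))  # []
```

The second call models the "Copying over backend internal TLS certificate" task below, which skips every item here because all services have `'tls_backend': 'no'`.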
orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.260310 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.260314 | orchestrator | 2025-09-19 01:10:55.260318 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-09-19 01:10:55.260321 | orchestrator | Friday 19 September 2025 01:05:58 +0000 (0:00:00.524) 0:03:18.644 ****** 2025-09-19 01:10:55.260325 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 01:10:55.260329 | orchestrator | 2025-09-19 01:10:55.260333 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-09-19 01:10:55.260336 | orchestrator | Friday 19 September 2025 01:05:58 +0000 (0:00:00.696) 0:03:19.341 ****** 2025-09-19 01:10:55.260340 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.260344 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.260347 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.260354 | orchestrator | 2025-09-19 01:10:55.260358 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-19 01:10:55.260362 | orchestrator | Friday 19 September 2025 01:05:59 +0000 (0:00:00.308) 0:03:19.650 ****** 2025-09-19 01:10:55.260365 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 01:10:55.260369 | orchestrator | 2025-09-19 01:10:55.260375 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-19 01:10:55.260379 | orchestrator | Friday 19 September 2025 01:05:59 +0000 (0:00:00.524) 0:03:20.174 ****** 2025-09-19 01:10:55.260386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 01:10:55.260390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 01:10:55.260395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 01:10:55.260404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.260408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.260417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.260421 | orchestrator | 2025-09-19 01:10:55.260424 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-19 01:10:55.260428 | orchestrator | Friday 19 September 2025 01:06:02 +0000 (0:00:02.590) 0:03:22.765 ****** 2025-09-19 01:10:55.260432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 01:10:55.260437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.260447 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.260454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 01:10:55.260461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.260465 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.260469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 01:10:55.260473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.260482 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.260486 | orchestrator | 2025-09-19 01:10:55.260490 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-19 01:10:55.260494 | orchestrator | Friday 19 September 2025 01:06:02 +0000 (0:00:00.606) 
0:03:23.372 ****** 2025-09-19 01:10:55.260500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 01:10:55.260505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.260509 | orchestrator | skipping: 
[testbed-node-0] 2025-09-19 01:10:55.260914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 01:10:55.260927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.260976 | orchestrator | skipping: 
[testbed-node-1] 2025-09-19 01:10:55.260986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 01:10:55.260990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.260994 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 01:10:55.260998 | orchestrator | 2025-09-19 01:10:55.261002 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-09-19 01:10:55.261006 | orchestrator | Friday 19 September 2025 01:06:03 +0000 (0:00:00.763) 0:03:24.135 ****** 2025-09-19 01:10:55.261015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 01:10:55.261019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 01:10:55.261030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 01:10:55.261037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261052 | orchestrator | 2025-09-19 01:10:55.261056 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-09-19 01:10:55.261060 | orchestrator | Friday 19 September 2025 01:06:06 +0000 (0:00:02.603) 0:03:26.739 ****** 2025-09-19 01:10:55.261064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 01:10:55.261071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 01:10:55.261078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 01:10:55.261085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261099 | orchestrator | 2025-09-19 01:10:55.261103 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-09-19 01:10:55.261106 | orchestrator | Friday 19 September 2025 01:06:11 +0000 (0:00:05.578) 0:03:32.317 ****** 2025-09-19 01:10:55.261113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 01:10:55.261117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.261124 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.261128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 01:10:55.261132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.261136 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.261142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 01:10:55.261151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.261155 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.261158 | orchestrator | 2025-09-19 01:10:55.261162 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-09-19 01:10:55.261170 | orchestrator | Friday 19 September 2025 01:06:12 +0000 (0:00:00.626) 0:03:32.943 ****** 2025-09-19 01:10:55.261174 | orchestrator | changed: [testbed-node-0] 2025-09-19 01:10:55.261178 | orchestrator | changed: [testbed-node-1] 2025-09-19 01:10:55.261181 | orchestrator | changed: [testbed-node-2] 2025-09-19 01:10:55.261185 | orchestrator | 2025-09-19 01:10:55.261189 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-09-19 01:10:55.261193 | orchestrator | Friday 19 September 2025 01:06:14 +0000 (0:00:01.761) 0:03:34.706 ****** 2025-09-19 01:10:55.261196 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.261200 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.261204 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.261207 | orchestrator | 2025-09-19 01:10:55.261211 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-09-19 01:10:55.261215 | orchestrator | Friday 19 September 2025 01:06:14 +0000 (0:00:00.565) 0:03:35.271 ****** 2025-09-19 01:10:55.261219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 01:10:55.261227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 01:10:55.261272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 01:10:55.261297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261310 | orchestrator | 2025-09-19 01:10:55.261314 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-19 01:10:55.261317 | orchestrator | Friday 19 September 2025 01:06:16 +0000 (0:00:01.943) 0:03:37.214 ****** 2025-09-19 01:10:55.261321 | orchestrator | 2025-09-19 01:10:55.261327 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-19 01:10:55.261331 | orchestrator | Friday 19 September 2025 01:06:16 
+0000 (0:00:00.138) 0:03:37.352 ****** 2025-09-19 01:10:55.261335 | orchestrator | 2025-09-19 01:10:55.261339 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-19 01:10:55.261342 | orchestrator | Friday 19 September 2025 01:06:17 +0000 (0:00:00.128) 0:03:37.481 ****** 2025-09-19 01:10:55.261346 | orchestrator | 2025-09-19 01:10:55.261350 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-09-19 01:10:55.261353 | orchestrator | Friday 19 September 2025 01:06:17 +0000 (0:00:00.140) 0:03:37.621 ****** 2025-09-19 01:10:55.261357 | orchestrator | changed: [testbed-node-0] 2025-09-19 01:10:55.261361 | orchestrator | changed: [testbed-node-1] 2025-09-19 01:10:55.261365 | orchestrator | changed: [testbed-node-2] 2025-09-19 01:10:55.261368 | orchestrator | 2025-09-19 01:10:55.261372 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-09-19 01:10:55.261376 | orchestrator | Friday 19 September 2025 01:06:35 +0000 (0:00:18.006) 0:03:55.628 ****** 2025-09-19 01:10:55.261383 | orchestrator | changed: [testbed-node-2] 2025-09-19 01:10:55.261386 | orchestrator | changed: [testbed-node-0] 2025-09-19 01:10:55.261390 | orchestrator | changed: [testbed-node-1] 2025-09-19 01:10:55.261394 | orchestrator | 2025-09-19 01:10:55.261397 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-09-19 01:10:55.261401 | orchestrator | 2025-09-19 01:10:55.261405 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-19 01:10:55.261409 | orchestrator | Friday 19 September 2025 01:06:46 +0000 (0:00:11.077) 0:04:06.705 ****** 2025-09-19 01:10:55.261413 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 01:10:55.261418 | 
orchestrator | 2025-09-19 01:10:55.261424 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-19 01:10:55.261428 | orchestrator | Friday 19 September 2025 01:06:47 +0000 (0:00:01.219) 0:04:07.925 ****** 2025-09-19 01:10:55.261432 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:10:55.261436 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:10:55.261439 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:10:55.261443 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.261447 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.261450 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.261454 | orchestrator | 2025-09-19 01:10:55.261458 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-09-19 01:10:55.261461 | orchestrator | Friday 19 September 2025 01:06:48 +0000 (0:00:00.729) 0:04:08.654 ****** 2025-09-19 01:10:55.261465 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.261469 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.261473 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.261476 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 01:10:55.261480 | orchestrator | 2025-09-19 01:10:55.261484 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-19 01:10:55.261487 | orchestrator | Friday 19 September 2025 01:06:49 +0000 (0:00:00.821) 0:04:09.476 ****** 2025-09-19 01:10:55.261491 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-09-19 01:10:55.261495 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-09-19 01:10:55.261499 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-09-19 01:10:55.261503 | orchestrator | 2025-09-19 01:10:55.261506 | orchestrator | TASK [module-load : Persist modules via modules-load.d] 
************************ 2025-09-19 01:10:55.261510 | orchestrator | Friday 19 September 2025 01:06:49 +0000 (0:00:00.862) 0:04:10.339 ****** 2025-09-19 01:10:55.261514 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-09-19 01:10:55.261518 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-09-19 01:10:55.261521 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-09-19 01:10:55.261525 | orchestrator | 2025-09-19 01:10:55.261529 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-19 01:10:55.261533 | orchestrator | Friday 19 September 2025 01:06:51 +0000 (0:00:01.221) 0:04:11.560 ****** 2025-09-19 01:10:55.261538 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-09-19 01:10:55.261542 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:10:55.261546 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-09-19 01:10:55.261550 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:10:55.261554 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-09-19 01:10:55.261559 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:10:55.261563 | orchestrator | 2025-09-19 01:10:55.261567 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-09-19 01:10:55.261571 | orchestrator | Friday 19 September 2025 01:06:51 +0000 (0:00:00.510) 0:04:12.070 ****** 2025-09-19 01:10:55.261576 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-19 01:10:55.261585 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 01:10:55.261590 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 01:10:55.261594 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-19 01:10:55.261598 | orchestrator | 
skipping: [testbed-node-0] 2025-09-19 01:10:55.261602 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 01:10:55.261607 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 01:10:55.261611 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.261615 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 01:10:55.261619 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 01:10:55.261624 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.261631 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-19 01:10:55.261635 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-19 01:10:55.261639 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-19 01:10:55.261643 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-19 01:10:55.261648 | orchestrator | 2025-09-19 01:10:55.261652 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-09-19 01:10:55.261656 | orchestrator | Friday 19 September 2025 01:06:53 +0000 (0:00:02.049) 0:04:14.120 ****** 2025-09-19 01:10:55.261661 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.261665 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.261669 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.261673 | orchestrator | changed: [testbed-node-3] 2025-09-19 01:10:55.261678 | orchestrator | changed: [testbed-node-4] 2025-09-19 01:10:55.261682 | orchestrator | changed: [testbed-node-5] 2025-09-19 01:10:55.261686 | orchestrator | 2025-09-19 01:10:55.261690 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-09-19 
01:10:55.261694 | orchestrator | Friday 19 September 2025 01:06:54 +0000 (0:00:01.187) 0:04:15.307 ****** 2025-09-19 01:10:55.261699 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.261703 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.261707 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.261712 | orchestrator | changed: [testbed-node-4] 2025-09-19 01:10:55.261716 | orchestrator | changed: [testbed-node-3] 2025-09-19 01:10:55.261720 | orchestrator | changed: [testbed-node-5] 2025-09-19 01:10:55.261724 | orchestrator | 2025-09-19 01:10:55.261729 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-19 01:10:55.261733 | orchestrator | Friday 19 September 2025 01:06:56 +0000 (0:00:01.905) 0:04:17.212 ****** 2025-09-19 01:10:55.261740 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261747 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 
'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261762 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261767 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261781 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261788 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261801 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261817 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261825 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261834 | orchestrator | 2025-09-19 01:10:55.261838 | orchestrator | TASK [nova-cell : include_tasks] 
*********************************************** 2025-09-19 01:10:55.261843 | orchestrator | Friday 19 September 2025 01:06:59 +0000 (0:00:02.289) 0:04:19.501 ****** 2025-09-19 01:10:55.261847 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 01:10:55.261852 | orchestrator | 2025-09-19 01:10:55.261856 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-19 01:10:55.261860 | orchestrator | Friday 19 September 2025 01:07:00 +0000 (0:00:01.279) 0:04:20.781 ****** 2025-09-19 01:10:55.261867 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261876 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261884 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261903 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8022'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261911 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261918 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261956 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 
2025-09-19 01:10:55.261963 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261970 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.261974 | orchestrator | 2025-09-19 01:10:55.261978 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-19 01:10:55.261982 | orchestrator | Friday 19 September 2025 01:07:04 +0000 (0:00:03.681) 0:04:24.463 ****** 2025-09-19 
01:10:55.261985 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 01:10:55.261992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 01:10:55.261996 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 01:10:55.262000 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 01:10:55.262009 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.262036 | orchestrator | skipping: [testbed-node-5] 
2025-09-19 01:10:55.262041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.262045 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:10:55.262049 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 01:10:55.262056 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 01:10:55.262060 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.262067 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:10:55.262075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 01:10:55.262079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.262083 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.262087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 01:10:55.262091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.262095 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.262101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 01:10:55.262105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.262113 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.262117 | orchestrator | 2025-09-19 01:10:55.262121 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-19 01:10:55.262124 | orchestrator | Friday 19 September 2025 01:07:05 +0000 (0:00:01.523) 0:04:25.986 ****** 2025-09-19 01:10:55.262163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 01:10:55.262168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 01:10:55.262172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.262176 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:10:55.262182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 01:10:55.262186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 01:10:55.262196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.262200 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:10:55.262204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 01:10:55.262208 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 01:10:55.262212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.262216 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:10:55.262222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 01:10:55.262229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': 
{'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.262235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 01:10:55.262239 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.262243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.262247 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.262251 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 01:10:55.262255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.262258 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.262262 | orchestrator | 2025-09-19 01:10:55.262266 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-19 01:10:55.262270 | orchestrator | Friday 19 September 2025 01:07:07 +0000 (0:00:02.093) 0:04:28.079 ****** 2025-09-19 01:10:55.262274 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.262281 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.262284 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.262288 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 01:10:55.262292 | orchestrator | 
2025-09-19 01:10:55.262298 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-09-19 01:10:55.262302 | orchestrator | Friday 19 September 2025 01:07:08 +0000 (0:00:01.094) 0:04:29.174 ******
2025-09-19 01:10:55.262306 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-19 01:10:55.262310 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-19 01:10:55.262314 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-19 01:10:55.262317 | orchestrator |
2025-09-19 01:10:55.262321 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-09-19 01:10:55.262325 | orchestrator | Friday 19 September 2025 01:07:09 +0000 (0:00:00.959) 0:04:30.133 ******
2025-09-19 01:10:55.262329 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-19 01:10:55.262332 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-19 01:10:55.262336 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-19 01:10:55.262340 | orchestrator |
2025-09-19 01:10:55.262344 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-09-19 01:10:55.262347 | orchestrator | Friday 19 September 2025 01:07:10 +0000 (0:00:01.117) 0:04:31.251 ******
2025-09-19 01:10:55.262351 | orchestrator | ok: [testbed-node-3]
2025-09-19 01:10:55.262355 | orchestrator | ok: [testbed-node-4]
2025-09-19 01:10:55.262359 | orchestrator | ok: [testbed-node-5]
2025-09-19 01:10:55.262362 | orchestrator |
2025-09-19 01:10:55.262366 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-09-19 01:10:55.262370 | orchestrator | Friday 19 September 2025 01:07:11 +0000 (0:00:00.512) 0:04:31.763 ******
2025-09-19 01:10:55.262374 | orchestrator | ok: [testbed-node-3]
2025-09-19 01:10:55.262377 | orchestrator | ok: [testbed-node-4]
2025-09-19 01:10:55.262381 | orchestrator | ok: [testbed-node-5]
2025-09-19 01:10:55.262385 | orchestrator |
2025-09-19 01:10:55.262389 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-09-19 01:10:55.262392 | orchestrator | Friday 19 September 2025 01:07:11 +0000 (0:00:00.502) 0:04:32.266 ******
2025-09-19 01:10:55.262396 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-19 01:10:55.262402 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-19 01:10:55.262406 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-19 01:10:55.262409 | orchestrator |
2025-09-19 01:10:55.262413 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-09-19 01:10:55.262417 | orchestrator | Friday 19 September 2025 01:07:13 +0000 (0:00:01.183) 0:04:33.450 ******
2025-09-19 01:10:55.262421 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-19 01:10:55.262424 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-19 01:10:55.262428 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-19 01:10:55.262432 | orchestrator |
2025-09-19 01:10:55.262436 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-09-19 01:10:55.262439 | orchestrator | Friday 19 September 2025 01:07:14 +0000 (0:00:01.405) 0:04:34.856 ******
2025-09-19 01:10:55.262443 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-19 01:10:55.262447 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-19 01:10:55.262451 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-19 01:10:55.262454 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-09-19 01:10:55.262458 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-09-19 01:10:55.262462 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-09-19 01:10:55.262465 | orchestrator |
2025-09-19 01:10:55.262469 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-09-19 01:10:55.262473 | orchestrator | Friday 19 September 2025 01:07:18 +0000 (0:00:03.777) 0:04:38.633 ******
2025-09-19 01:10:55.262479 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:10:55.262483 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:10:55.262487 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:10:55.262491 | orchestrator |
2025-09-19 01:10:55.262494 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-09-19 01:10:55.262498 | orchestrator | Friday 19 September 2025 01:07:18 +0000 (0:00:00.315) 0:04:38.949 ******
2025-09-19 01:10:55.262502 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:10:55.262505 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:10:55.262509 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:10:55.262513 | orchestrator |
2025-09-19 01:10:55.262516 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-09-19 01:10:55.262520 | orchestrator | Friday 19 September 2025 01:07:18 +0000 (0:00:00.297) 0:04:39.247 ******
2025-09-19 01:10:55.262524 | orchestrator | changed: [testbed-node-3]
2025-09-19 01:10:55.262528 | orchestrator | changed: [testbed-node-4]
2025-09-19 01:10:55.262531 | orchestrator | changed: [testbed-node-5]
2025-09-19 01:10:55.262535 | orchestrator |
2025-09-19 01:10:55.262539 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-09-19 01:10:55.262543 | orchestrator | Friday 19 September 2025 01:07:20 +0000 (0:00:01.894) 0:04:41.142 ******
2025-09-19 01:10:55.262547 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-19 01:10:55.262551 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-19 01:10:55.262555 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-19 01:10:55.262559 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-19 01:10:55.262562 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-19 01:10:55.262570 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-19 01:10:55.262574 | orchestrator |
2025-09-19 01:10:55.262578 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-09-19 01:10:55.262581 | orchestrator | Friday 19 September 2025 01:07:23 +0000 (0:00:03.108) 0:04:44.250 ******
2025-09-19 01:10:55.262585 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-19 01:10:55.262589 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-19 01:10:55.262593 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-19 01:10:55.262596 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-19 01:10:55.262600 | orchestrator | changed: [testbed-node-3]
2025-09-19 01:10:55.262604 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-19 01:10:55.262607 | orchestrator | changed: [testbed-node-4]
2025-09-19 01:10:55.262611 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-19 01:10:55.262615 | orchestrator | changed: [testbed-node-5]
2025-09-19 01:10:55.262618 | orchestrator |
2025-09-19 01:10:55.262622 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-09-19 01:10:55.262626 | orchestrator | Friday 19 September 2025 01:07:27 +0000 (0:00:03.360) 0:04:47.611 ******
2025-09-19 01:10:55.262630 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:10:55.262633 | orchestrator |
2025-09-19 01:10:55.262637 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-09-19 01:10:55.262641 | orchestrator | Friday 19 September 2025 01:07:27 +0000 (0:00:00.124) 0:04:47.736 ******
2025-09-19 01:10:55.262645 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:10:55.262648 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:10:55.262655 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:10:55.262659 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:10:55.262662 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:10:55.262666 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:10:55.262670 | orchestrator |
2025-09-19 01:10:55.262674 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-09-19 01:10:55.262679 | orchestrator | Friday 19 September 2025 01:07:28 +0000 (0:00:00.740) 0:04:48.477 ******
2025-09-19 01:10:55.262683 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-19 01:10:55.262687 | orchestrator |
2025-09-19 01:10:55.262690 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-09-19 01:10:55.262694 | orchestrator | Friday 19 September 2025 01:07:28 +0000 (0:00:00.722) 0:04:49.199 ******
2025-09-19 01:10:55.262698 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:10:55.262702 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:10:55.262705 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:10:55.262709 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:10:55.262713 | orchestrator | skipping: [testbed-node-1]
2025-09-19 01:10:55.262716 | orchestrator | skipping: [testbed-node-2]
2025-09-19 01:10:55.262720 |
orchestrator | 2025-09-19 01:10:55.262724 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-09-19 01:10:55.262727 | orchestrator | Friday 19 September 2025 01:07:29 +0000 (0:00:00.619) 0:04:49.819 ****** 2025-09-19 01:10:55.262731 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 01:10:55.262735 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 01:10:55.262742 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 01:10:55.262749 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 01:10:55.262756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 01:10:55.262760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 01:10:55.262764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 01:10:55.262768 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 01:10:55.262774 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 01:10:55.262778 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.262789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.262793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.262797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.262801 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.262807 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.262814 | orchestrator | 2025-09-19 01:10:55.262818 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-09-19 01:10:55.262821 | orchestrator | Friday 19 September 2025 01:07:33 +0000 (0:00:03.955) 0:04:53.774 ****** 2025-09-19 01:10:55.262825 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 01:10:55.262832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 01:10:55.262836 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 01:10:55.262840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 01:10:55.262846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 01:10:55.262853 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 01:10:55.262858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 01:10:55.262862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 01:10:55.262866 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.262870 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.262876 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.262883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 01:10:55.262889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.262893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.262897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.262901 | orchestrator | 2025-09-19 01:10:55.262905 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-09-19 01:10:55.262909 | orchestrator | Friday 19 September 2025 01:07:39 +0000 (0:00:06.549) 0:05:00.324 ****** 2025-09-19 01:10:55.262912 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:10:55.262916 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:10:55.262920 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:10:55.262924 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.262927 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.262945 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.262949 | orchestrator | 2025-09-19 01:10:55.262953 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-09-19 01:10:55.262957 | orchestrator | Friday 19 September 2025 01:07:41 +0000 (0:00:01.637) 0:05:01.962 ****** 2025-09-19 01:10:55.262961 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-19 01:10:55.262967 | orchestrator | skipping: [testbed-node-1] => (item={'src': 
'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-19 01:10:55.262971 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-19 01:10:55.262975 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-19 01:10:55.262978 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-19 01:10:55.262982 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-19 01:10:55.262986 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-19 01:10:55.262990 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.262994 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-19 01:10:55.262997 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.263004 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-19 01:10:55.263008 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.263011 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-19 01:10:55.263015 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-19 01:10:55.263019 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-19 01:10:55.263023 | orchestrator | 2025-09-19 01:10:55.263026 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-09-19 01:10:55.263030 | orchestrator | Friday 19 September 2025 01:07:45 +0000 (0:00:03.669) 0:05:05.631 ****** 2025-09-19 01:10:55.263034 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:10:55.263037 | orchestrator | skipping: [testbed-node-4] 2025-09-19 
01:10:55.263041 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:10:55.263045 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.263048 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.263052 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.263056 | orchestrator | 2025-09-19 01:10:55.263060 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-09-19 01:10:55.263063 | orchestrator | Friday 19 September 2025 01:07:46 +0000 (0:00:00.793) 0:05:06.425 ****** 2025-09-19 01:10:55.263067 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-19 01:10:55.263071 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-19 01:10:55.263077 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-19 01:10:55.263080 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-19 01:10:55.263084 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-19 01:10:55.263088 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-19 01:10:55.263092 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-19 01:10:55.263095 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-19 01:10:55.263099 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-19 01:10:55.263103 | orchestrator | skipping: 
[testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-19 01:10:55.263109 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.263113 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-19 01:10:55.263117 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.263120 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-19 01:10:55.263124 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.263128 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-19 01:10:55.263132 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-19 01:10:55.263135 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-19 01:10:55.263139 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-19 01:10:55.263143 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-19 01:10:55.263146 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-19 01:10:55.263150 | orchestrator | 2025-09-19 01:10:55.263154 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-09-19 01:10:55.263158 | orchestrator | Friday 19 September 2025 01:07:51 +0000 (0:00:05.214) 0:05:11.639 ****** 2025-09-19 01:10:55.263161 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-19 01:10:55.263165 | orchestrator | 
skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-19 01:10:55.263169 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-19 01:10:55.263173 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-19 01:10:55.263176 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-19 01:10:55.263180 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-19 01:10:55.263184 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-19 01:10:55.263191 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-19 01:10:55.263195 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-19 01:10:55.263198 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-19 01:10:55.263202 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-19 01:10:55.263206 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-19 01:10:55.263210 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-19 01:10:55.263213 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.263217 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-19 01:10:55.263221 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.263225 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-19 01:10:55.263228 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-19 
01:10:55.263232 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.263236 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-19 01:10:55.263240 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-19 01:10:55.263243 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-19 01:10:55.263250 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-19 01:10:55.263255 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-19 01:10:55.263259 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-19 01:10:55.263263 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-19 01:10:55.263266 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-19 01:10:55.263270 | orchestrator | 2025-09-19 01:10:55.263274 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-09-19 01:10:55.263278 | orchestrator | Friday 19 September 2025 01:07:58 +0000 (0:00:07.230) 0:05:18.870 ****** 2025-09-19 01:10:55.263281 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:10:55.263285 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:10:55.263289 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:10:55.263293 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.263296 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.263300 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.263304 | orchestrator | 2025-09-19 01:10:55.263307 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-09-19 01:10:55.263311 | orchestrator | Friday 19 September 2025 
01:07:59 +0000 (0:00:00.690) 0:05:19.560 ****** 2025-09-19 01:10:55.263315 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:10:55.263319 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:10:55.263322 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:10:55.263326 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.263330 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.263333 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.263337 | orchestrator | 2025-09-19 01:10:55.263341 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-09-19 01:10:55.263345 | orchestrator | Friday 19 September 2025 01:08:00 +0000 (0:00:00.969) 0:05:20.530 ****** 2025-09-19 01:10:55.263348 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.263352 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.263356 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.263359 | orchestrator | changed: [testbed-node-3] 2025-09-19 01:10:55.263363 | orchestrator | changed: [testbed-node-4] 2025-09-19 01:10:55.263367 | orchestrator | changed: [testbed-node-5] 2025-09-19 01:10:55.263371 | orchestrator | 2025-09-19 01:10:55.263374 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-09-19 01:10:55.263378 | orchestrator | Friday 19 September 2025 01:08:01 +0000 (0:00:01.814) 0:05:22.345 ****** 2025-09-19 01:10:55.263382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 01:10:55.263388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 01:10:55.263395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.263399 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:10:55.263405 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 01:10:55.263409 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 01:10:55.263413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.263417 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:10:55.263424 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 01:10:55.263431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 
01:10:55.263439 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.263443 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:10:55.263447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 01:10:55.263451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.263455 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.263459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 01:10:55.263465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.263472 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.263476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 01:10:55.263482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 01:10:55.263486 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.263489 | orchestrator | 2025-09-19 01:10:55.263493 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-09-19 01:10:55.263497 | orchestrator | Friday 19 September 2025 01:08:03 +0000 (0:00:01.664) 0:05:24.009 ****** 2025-09-19 01:10:55.263501 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-19 01:10:55.263504 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-19 01:10:55.263508 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:10:55.263512 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-19 01:10:55.263516 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-19 01:10:55.263520 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:10:55.263523 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-19 01:10:55.263527 | orchestrator | skipping: [testbed-node-5] => 
(item=nova-compute-ironic)  2025-09-19 01:10:55.263531 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:10:55.263534 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-19 01:10:55.263538 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-19 01:10:55.263542 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.263545 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-19 01:10:55.263549 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-19 01:10:55.263553 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.263557 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-19 01:10:55.263560 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-19 01:10:55.263564 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.263568 | orchestrator | 2025-09-19 01:10:55.263572 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-09-19 01:10:55.263575 | orchestrator | Friday 19 September 2025 01:08:04 +0000 (0:00:00.671) 0:05:24.680 ****** 2025-09-19 01:10:55.263579 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 01:10:55.263588 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 01:10:55.263593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 01:10:55.263599 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 01:10:55.263603 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 01:10:55.263607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 01:10:55.263614 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 01:10:55.263620 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.263624 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 8022'], 'timeout': '30'}}}) 2025-09-19 01:10:55.263644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.263648 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 01:10:55.263652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.263659 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.263666 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.263670 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 01:10:55.263674 | orchestrator | 2025-09-19 01:10:55.263677 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-19 01:10:55.263681 | orchestrator | Friday 19 September 2025 01:08:07 +0000 (0:00:03.237) 0:05:27.918 ****** 2025-09-19 01:10:55.263685 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:10:55.263689 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:10:55.263693 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:10:55.263698 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.263702 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.263706 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.263709 | orchestrator | 2025-09-19 01:10:55.263713 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-19 01:10:55.263717 | orchestrator | Friday 19 September 2025 01:08:08 +0000 (0:00:00.618) 0:05:28.537 ****** 2025-09-19 01:10:55.263720 | orchestrator | 2025-09-19 01:10:55.263724 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-19 01:10:55.263728 | orchestrator | Friday 19 September 2025 01:08:08 +0000 (0:00:00.131) 0:05:28.669 ****** 2025-09-19 01:10:55.263732 | orchestrator | 2025-09-19 01:10:55.263735 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-19 01:10:55.263739 | orchestrator | Friday 19 September 2025 01:08:08 +0000 (0:00:00.382) 0:05:29.051 ****** 2025-09-19 01:10:55.263743 | orchestrator | 2025-09-19 01:10:55.263746 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-19 
01:10:55.263753 | orchestrator | Friday 19 September 2025 01:08:08 +0000 (0:00:00.153) 0:05:29.205 ****** 2025-09-19 01:10:55.263757 | orchestrator | 2025-09-19 01:10:55.263761 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-19 01:10:55.263764 | orchestrator | Friday 19 September 2025 01:08:08 +0000 (0:00:00.157) 0:05:29.363 ****** 2025-09-19 01:10:55.263768 | orchestrator | 2025-09-19 01:10:55.263772 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-19 01:10:55.263776 | orchestrator | Friday 19 September 2025 01:08:09 +0000 (0:00:00.146) 0:05:29.510 ****** 2025-09-19 01:10:55.263779 | orchestrator | 2025-09-19 01:10:55.263783 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-09-19 01:10:55.263787 | orchestrator | Friday 19 September 2025 01:08:09 +0000 (0:00:00.140) 0:05:29.650 ****** 2025-09-19 01:10:55.263790 | orchestrator | changed: [testbed-node-0] 2025-09-19 01:10:55.263794 | orchestrator | changed: [testbed-node-1] 2025-09-19 01:10:55.263798 | orchestrator | changed: [testbed-node-2] 2025-09-19 01:10:55.263801 | orchestrator | 2025-09-19 01:10:55.263805 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-09-19 01:10:55.263809 | orchestrator | Friday 19 September 2025 01:08:16 +0000 (0:00:07.253) 0:05:36.904 ****** 2025-09-19 01:10:55.263813 | orchestrator | changed: [testbed-node-0] 2025-09-19 01:10:55.263816 | orchestrator | changed: [testbed-node-2] 2025-09-19 01:10:55.263820 | orchestrator | changed: [testbed-node-1] 2025-09-19 01:10:55.263824 | orchestrator | 2025-09-19 01:10:55.263827 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-09-19 01:10:55.263831 | orchestrator | Friday 19 September 2025 01:08:28 +0000 (0:00:11.860) 0:05:48.764 ****** 2025-09-19 01:10:55.263835 | orchestrator | 
changed: [testbed-node-4] 2025-09-19 01:10:55.263839 | orchestrator | changed: [testbed-node-5] 2025-09-19 01:10:55.263842 | orchestrator | changed: [testbed-node-3] 2025-09-19 01:10:55.263846 | orchestrator | 2025-09-19 01:10:55.263850 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-09-19 01:10:55.263853 | orchestrator | Friday 19 September 2025 01:08:48 +0000 (0:00:19.907) 0:06:08.672 ****** 2025-09-19 01:10:55.263857 | orchestrator | changed: [testbed-node-3] 2025-09-19 01:10:55.263861 | orchestrator | changed: [testbed-node-4] 2025-09-19 01:10:55.263865 | orchestrator | changed: [testbed-node-5] 2025-09-19 01:10:55.263868 | orchestrator | 2025-09-19 01:10:55.263872 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-09-19 01:10:55.263876 | orchestrator | Friday 19 September 2025 01:09:19 +0000 (0:00:31.368) 0:06:40.041 ****** 2025-09-19 01:10:55.263880 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2025-09-19 01:10:55.263884 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2025-09-19 01:10:55.263887 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 
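The "FAILED - RETRYING ... (10 retries left)" messages above come from Ansible's `retries`/`delay` loop on the libvirt readiness check: the first probe fails while the freshly restarted container is still starting, and a later retry succeeds. A minimal sketch of that polling pattern (function name and defaults are illustrative):

```python
import time

def wait_until_ready(check, retries=10, delay=5.0):
    """Poll check() until it succeeds or retries are exhausted,
    mirroring the Ansible retry loop seen in the log above."""
    for _ in range(retries):
        if check():
            return True
        # Each failed attempt would log "FAILED - RETRYING (N retries left)".
        time.sleep(delay)
    return False
```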
2025-09-19 01:10:55.263891 | orchestrator | changed: [testbed-node-3] 2025-09-19 01:10:55.263895 | orchestrator | changed: [testbed-node-4] 2025-09-19 01:10:55.263898 | orchestrator | changed: [testbed-node-5] 2025-09-19 01:10:55.263902 | orchestrator | 2025-09-19 01:10:55.263909 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-09-19 01:10:55.263913 | orchestrator | Friday 19 September 2025 01:09:25 +0000 (0:00:06.239) 0:06:46.280 ****** 2025-09-19 01:10:55.263917 | orchestrator | changed: [testbed-node-3] 2025-09-19 01:10:55.263921 | orchestrator | changed: [testbed-node-4] 2025-09-19 01:10:55.263924 | orchestrator | changed: [testbed-node-5] 2025-09-19 01:10:55.263928 | orchestrator | 2025-09-19 01:10:55.263944 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-09-19 01:10:55.263948 | orchestrator | Friday 19 September 2025 01:09:26 +0000 (0:00:01.032) 0:06:47.313 ****** 2025-09-19 01:10:55.263952 | orchestrator | changed: [testbed-node-4] 2025-09-19 01:10:55.263958 | orchestrator | changed: [testbed-node-5] 2025-09-19 01:10:55.263962 | orchestrator | changed: [testbed-node-3] 2025-09-19 01:10:55.263966 | orchestrator | 2025-09-19 01:10:55.263969 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-09-19 01:10:55.263973 | orchestrator | Friday 19 September 2025 01:09:45 +0000 (0:00:18.934) 0:07:06.247 ****** 2025-09-19 01:10:55.263977 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:10:55.263980 | orchestrator | 2025-09-19 01:10:55.263984 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-09-19 01:10:55.263988 | orchestrator | Friday 19 September 2025 01:09:45 +0000 (0:00:00.125) 0:07:06.372 ****** 2025-09-19 01:10:55.263991 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:10:55.263995 | orchestrator | skipping: [testbed-node-3] 
2025-09-19 01:10:55.263999 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.264003 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.264006 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.264010 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-09-19 01:10:55.264014 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-19 01:10:55.264018 | orchestrator | 2025-09-19 01:10:55.264023 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-09-19 01:10:55.264027 | orchestrator | Friday 19 September 2025 01:10:08 +0000 (0:00:22.338) 0:07:28.710 ****** 2025-09-19 01:10:55.264031 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.264035 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:10:55.264038 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:10:55.264042 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.264046 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.264049 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:10:55.264053 | orchestrator | 2025-09-19 01:10:55.264057 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-09-19 01:10:55.264060 | orchestrator | Friday 19 September 2025 01:10:16 +0000 (0:00:08.211) 0:07:36.921 ****** 2025-09-19 01:10:55.264064 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:10:55.264068 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:10:55.264071 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:10:55.264075 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:10:55.264079 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:10:55.264082 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2025-09-19 01:10:55.264086 | 
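The "Waiting for nova-compute services to register themselves" task above is a bounded retry loop (20 retries in this run, one FAILED - RETRYING before success). A minimal sketch of the same until/retries/delay pattern; `compute_services_registered` is a hypothetical stand-in for the real service-list query, not the play's actual code:

```python
import time

def wait_until(check, retries=20, delay=1.0):
    """Poll `check` until it returns a truthy value, or raise after `retries` attempts."""
    for attempt in range(1, retries + 1):
        result = check()
        if result:
            return result
        if attempt < retries:
            time.sleep(delay)
    raise TimeoutError(f"condition not met after {retries} attempts")

# Simulated service list: empty on the first two calls, populated afterwards,
# mimicking compute services that register shortly after a container restart.
calls = {"n": 0}
def compute_services_registered():
    calls["n"] += 1
    return ["nova-compute@testbed-node-3"] if calls["n"] >= 3 else []

services = wait_until(compute_services_registered, retries=20, delay=0)
```

The bounded retry (rather than an unbounded wait) is what lets the following "Fail if nova-compute service failed to register" task turn a hung registration into a hard failure.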
TASK [nova-cell : Get a list of existing cells] ********************************
Friday 19 September 2025 01:10:20 +0000 (0:00:03.672) 0:07:40.593 ******
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]

TASK [nova-cell : Extract current cell settings from list] *********************
Friday 19 September 2025 01:10:32 +0000 (0:00:12.118) 0:07:52.712 ******
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]

TASK [nova-cell : Fail if cell settings not found] *****************************
Friday 19 September 2025 01:10:33 +0000 (0:00:01.314) 0:07:54.027 ******
skipping: [testbed-node-5]

TASK [nova-cell : Discover nova hosts] *****************************************
Friday 19 September 2025 01:10:34 +0000 (0:00:01.321) 0:07:55.349 ******
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]

TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
Friday 19 September 2025 01:10:46 +0000 (0:00:11.074) 0:08:06.423 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Refresh nova scheduler cell cache] ***************************************

TASK [nova : Refresh cell cache in nova scheduler] *****************************
Friday 19 September 2025 01:10:47 +0000 (0:00:01.754) 0:08:08.177 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

PLAY [Reload global Nova super conductor services] *****************************

TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
Friday 19 September 2025 01:10:48 +0000 (0:00:00.923) 0:08:09.101 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY [Reload Nova cell services] ***********************************************

TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
Friday 19 September 2025 01:10:49 +0000 (0:00:00.701) 0:08:09.802 ******
skipping: [testbed-node-3] => (item=nova-conductor)
skipping: [testbed-node-3] => (item=nova-compute)
skipping: [testbed-node-3] => (item=nova-compute-ironic)
skipping: [testbed-node-3] => (item=nova-novncproxy)
skipping: [testbed-node-3] => (item=nova-serialproxy)
skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
skipping: [testbed-node-4] => (item=nova-conductor)
skipping: [testbed-node-4] => (item=nova-compute)
skipping: [testbed-node-4] => (item=nova-compute-ironic)
skipping: [testbed-node-4] => (item=nova-novncproxy)
skipping: [testbed-node-4] => (item=nova-serialproxy)
skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
skipping: [testbed-node-3]
skipping: [testbed-node-5] => (item=nova-conductor)
skipping: [testbed-node-5] => (item=nova-compute)
skipping: [testbed-node-5] => (item=nova-compute-ironic)
skipping: [testbed-node-5] => (item=nova-novncproxy)
skipping: [testbed-node-5] => (item=nova-serialproxy)
skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
skipping: [testbed-node-4]
skipping: [testbed-node-0] => (item=nova-conductor)
skipping: [testbed-node-0] => (item=nova-compute)
skipping: [testbed-node-0] => (item=nova-compute-ironic)
skipping: [testbed-node-0] => (item=nova-novncproxy)
skipping: [testbed-node-0] => (item=nova-serialproxy)
skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
skipping: [testbed-node-5]
skipping: [testbed-node-1] => (item=nova-conductor)
skipping: [testbed-node-1] => (item=nova-compute)
skipping: [testbed-node-1] => (item=nova-compute-ironic)
skipping: [testbed-node-1] => (item=nova-novncproxy)
skipping: [testbed-node-1] => (item=nova-serialproxy)
skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=nova-conductor)
skipping: [testbed-node-2] => (item=nova-compute)
skipping: [testbed-node-2] => (item=nova-compute-ironic)
skipping: [testbed-node-2] => (item=nova-novncproxy)
skipping: [testbed-node-2] => (item=nova-serialproxy)
skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
skipping: [testbed-node-2]
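The "Get a list of existing cells" and "Extract current cell settings from list" tasks above amount to reading a cell listing and picking out one cell's transport and database URLs. A minimal parsing sketch over a hypothetical pipe-separated table; the column layout here is illustrative, not the exact output format the role consumes:

```python
# Hypothetical cell listing, loosely modeled on a cell_v2-style table:
# name | uuid | transport_url | database_connection
LISTING = """\
cell0 | 00000000-0000-0000-0000-000000000000 | none:/ | mysql+pymysql://nova@db/nova_cell0
cell1 | 1b2c3d4e-0000-0000-0000-000000000001 | rabbit://openstack@mq/ | mysql+pymysql://nova@db/nova
"""

FIELDS = ("name", "uuid", "transport_url", "database_connection")

def extract_cell(listing: str, name: str) -> dict:
    """Return the settings dict for one named cell, or raise if it is absent."""
    for line in listing.splitlines():
        fields = [f.strip() for f in line.split("|")]
        if len(fields) == len(FIELDS) and fields[0] == name:
            return dict(zip(FIELDS, fields))
    # The play's "Fail if cell settings not found" task covers this branch.
    raise LookupError(f"cell {name!r} not found")

cell = extract_cell(LISTING, "cell1")
```

In the log the extraction succeeds, so "Fail if cell settings not found" is skipped and "Discover nova hosts" runs against the found cell.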
PLAY [Reload global Nova API services] *****************************************

TASK [nova : Reload nova API services to remove RPC version pin] ***************
Friday 19 September 2025 01:10:50 +0000 (0:00:01.318) 0:08:11.120 ******
skipping: [testbed-node-0] => (item=nova-scheduler)
skipping: [testbed-node-0] => (item=nova-api)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=nova-scheduler)
skipping: [testbed-node-1] => (item=nova-api)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=nova-scheduler)
skipping: [testbed-node-2] => (item=nova-api)
skipping: [testbed-node-2]

PLAY [Run Nova API online data migrations] *************************************

TASK [nova : Run Nova API online database migrations] **************************
Friday 19 September 2025 01:10:51 +0000 (0:00:00.536) 0:08:11.657 ******
skipping: [testbed-node-0]

PLAY [Run Nova cell online data migrations] ************************************

TASK [nova-cell : Run Nova cell online database migrations] ********************
Friday 19 September 2025 01:10:52 +0000 (0:00:00.832) 0:08:12.489 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY RECAP *********************************************************************
testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0  rescued=0 ignored=0
testbed-node-0  : ok=54 changed=35 unreachable=0 failed=0 skipped=44 rescued=0 ignored=0
testbed-node-1  : ok=27 changed=19 unreachable=0 failed=0 skipped=51 rescued=0 ignored=0
testbed-node-2  : ok=27 changed=19 unreachable=0 failed=0 skipped=51 rescued=0 ignored=0
testbed-node-3  : ok=38 changed=27 unreachable=0 failed=0 skipped=21 rescued=0 ignored=0
testbed-node-4  : ok=37 changed=27 unreachable=0 failed=0 skipped=19 rescued=0 ignored=0
testbed-node-5  : ok=42 changed=27 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0

TASKS RECAP ********************************************************************
Friday 19 September 2025 01:10:52 +0000 (0:00:00.426) 0:08:12.915 ******
===============================================================================
nova : Running Nova API bootstrap container ---------------------------- 35.28s
nova-cell : Restart nova-libvirt container ----------------------------- 31.37s
nova-cell : Waiting for nova-compute services to register themselves --- 22.34s
nova-cell : Running Nova cell bootstrap container ---------------------- 22.32s
nova-cell : Restart nova-ssh container --------------------------------- 19.91s
nova-cell : Restart nova-compute container ----------------------------- 18.93s
nova : Restart nova-scheduler container -------------------------------- 18.01s
nova : Running Nova API bootstrap container ---------------------------- 17.38s
nova : Create cell0 mappings ------------------------------------------- 14.86s
nova-cell : Get a list of existing cells ------------------------------- 12.84s
nova-cell : Create cell ------------------------------------------------ 12.48s
nova-cell : Get a list of existing cells ------------------------------- 12.12s
nova-cell : Restart nova-novncproxy container -------------------------- 11.86s
nova : Restart nova-api container -------------------------------------- 11.08s
nova-cell : Discover nova hosts ---------------------------------------- 11.07s
nova-cell : Get a list of existing cells ------------------------------- 10.93s
nova-cell : Fail if nova-compute service failed to register ------------- 8.21s
service-ks-register : nova | Granting user roles ------------------------ 7.64s
nova-cell : Restart nova-conductor container ---------------------------- 7.25s
nova-cell : Copying files for nova-ssh ---------------------------------- 7.23s
2025-09-19 01:10:58 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 01:11:01 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 01:11:04 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 01:11:07 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 01:11:10 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 01:11:13 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 01:11:16 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 01:11:19 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 01:11:22 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 01:11:25 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 01:11:28 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 01:11:31 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 01:11:34 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 01:11:37 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 01:11:40 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 01:11:43 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 01:11:46 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 01:11:50 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 01:11:53 | INFO  | Wait 1 second(s) until refresh of running tasks

--> DEPLOY IN A NUTSHELL -- END -- Fri Sep 19 01:11:56 UTC 2025

ok: Runtime: 0:35:23.875730

TASK [Bootstrap services]

# BOOTSTRAP

+ set -e
+ echo
+ echo '# BOOTSTRAP'
+ echo
+ sh -c /opt/configuration/scripts/bootstrap-services.sh
+ set -e
+ sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2025-09-19 01:12:01 | INFO  | It takes a moment until task c5255ab0-8b65-4f0b-8c67-4575abc0d16c (flavor-manager) has been started and output is visible here.
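The PLAY RECAP above is the standard Ansible per-host summary, and it is what CI tooling typically greps to decide whether a run was healthy. A small sketch that parses one recap line into counters; the sample line is copied from this run:

```python
import re

RECAP_LINE = ("testbed-node-0 : ok=54 changed=35 unreachable=0 "
              "failed=0 skipped=44 rescued=0 ignored=0")

def parse_recap(line: str):
    """Split an Ansible PLAY RECAP line into (host, {counter: value})."""
    host, _, rest = line.partition(":")
    counts = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", rest)}
    return host.strip(), counts

host, counts = parse_recap(RECAP_LINE)
# A healthy run has no failed and no unreachable hosts, as seen here.
healthy = counts["failed"] == 0 and counts["unreachable"] == 0
```

The same check applied to every recap line is a cheap gate before the job moves on to the bootstrap phase below.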
2025-09-19 01:12:05 | INFO  | Flavor SCS-1V-4 created
2025-09-19 01:12:05 | INFO  | Flavor SCS-2V-8 created
2025-09-19 01:12:06 | INFO  | Flavor SCS-4V-16 created
2025-09-19 01:12:06 | INFO  | Flavor SCS-8V-32 created
2025-09-19 01:12:06 | INFO  | Flavor SCS-1V-2 created
2025-09-19 01:12:06 | INFO  | Flavor SCS-2V-4 created
2025-09-19 01:12:06 | INFO  | Flavor SCS-4V-8 created
2025-09-19 01:12:06 | INFO  | Flavor SCS-8V-16 created
2025-09-19 01:12:06 | INFO  | Flavor SCS-16V-32 created
2025-09-19 01:12:07 | INFO  | Flavor SCS-1V-8 created
2025-09-19 01:12:07 | INFO  | Flavor SCS-2V-16 created
2025-09-19 01:12:07 | INFO  | Flavor SCS-4V-32 created
2025-09-19 01:12:07 | INFO  | Flavor SCS-1L-1 created
2025-09-19 01:12:07 | INFO  | Flavor SCS-2V-4-20s created
2025-09-19 01:12:07 | INFO  | Flavor SCS-4V-16-100s created
2025-09-19 01:12:09 | INFO  | Trying to run play bootstrap-basic in environment openstack
2025-09-19 01:12:20 | INFO  | Task 36a9cbcb-e4e1-444f-ada3-34be00f50d79 (bootstrap-basic) was prepared for execution.
2025-09-19 01:12:20 | INFO  | It takes a moment until task 36a9cbcb-e4e1-444f-ada3-34be00f50d79 (bootstrap-basic) has been started and output is visible here.
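The flavor names created above follow the SCS flavor naming scheme: vCPU count with a performance suffix, RAM in GiB, and an optional disk size where a trailing "s" marks SSD (so SCS-4V-16-100s reads as 4 vCPUs, 16 GiB RAM, 100 GB SSD). A loose decoder sketch; the suffix semantics are paraphrased from the SCS standard and may not cover every variant:

```python
import re

# Assumed shape: SCS-<vcpus><suffix>-<ram GiB>[-<disk GB><s?>], suffix one of
# V/L/T/C as used by the SCS naming standard (only V and L appear in this log).
PATTERN = re.compile(r"^SCS-(\d+)([VLTC])-(\d+)(?:-(\d+)(s?))?$")

def decode(name: str) -> dict:
    m = PATTERN.match(name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    cpus, suffix, ram, disk, ssd = m.groups()
    return {
        "vcpus": int(cpus),
        "cpu_suffix": suffix,
        "ram_gib": int(ram),
        "disk_gb": int(disk) if disk else 0,
        "ssd": ssd == "s",
    }

flavor = decode("SCS-4V-16-100s")
```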
PLAY [Bootstrap basic OpenStack services] **************************************

TASK [Gathering Facts] *********************************************************
Friday 19 September 2025 01:12:24 +0000 (0:00:00.082) 0:00:00.082 ******
ok: [localhost]

TASK [Get volume type LUKS] ****************************************************
Friday 19 September 2025 01:12:27 +0000 (0:00:02.903) 0:00:02.985 ******
ok: [localhost]

TASK [Create volume type LUKS] *************************************************
Friday 19 September 2025 01:12:35 +0000 (0:00:08.536) 0:00:11.522 ******
changed: [localhost]

TASK [Get volume type local] ***************************************************
Friday 19 September 2025 01:12:42 +0000 (0:00:07.184) 0:00:18.707 ******
ok: [localhost]

TASK [Create volume type local] ************************************************
Friday 19 September 2025 01:12:50 +0000 (0:00:07.579) 0:00:26.286 ******
changed: [localhost]

TASK [Create public network] ***************************************************
Friday 19 September 2025 01:12:57 +0000 (0:00:06.693) 0:00:32.979 ******
changed: [localhost]

TASK [Set public network to default] *******************************************
Friday 19 September 2025 01:13:04 +0000 (0:00:06.902) 0:00:39.882 ******
changed: [localhost]

TASK [Create public subnet] ****************************************************
Friday 19 September 2025 01:13:11 +0000 (0:00:07.256) 0:00:47.139 ******
changed: [localhost]

TASK [Create default IPv4 subnet pool] *****************************************
Friday 19 September 2025 01:13:15 +0000 (0:00:04.434) 0:00:51.574 ******
changed: [localhost]

TASK [Create manager role] *****************************************************
Friday 19 September 2025 01:13:20 +0000 (0:00:04.446) 0:00:56.020 ******
ok: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=10 changed=6 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

TASKS RECAP ********************************************************************
Friday 19 September 2025 01:13:23 +0000 (0:00:03.489) 0:00:59.509 ******
===============================================================================
Get volume type LUKS ---------------------------------------------------- 8.54s
Get volume type local --------------------------------------------------- 7.58s
Set public network to default ------------------------------------------- 7.26s
Create volume type LUKS ------------------------------------------------- 7.18s
Create public network --------------------------------------------------- 6.90s
Create volume type local ------------------------------------------------ 6.69s
Create default IPv4 subnet pool ----------------------------------------- 4.45s
Create public subnet ---------------------------------------------------- 4.43s
Create manager role ----------------------------------------------------- 3.49s
Gathering Facts --------------------------------------------------------- 2.90s
2025-09-19 01:13:26 | INFO  | It takes a moment until task dd7dd180-7e9f-4b48-964c-1624ae4465fb (image-manager) has been started and output is visible here.
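Each resource in the bootstrap play above follows a get-then-create pattern: "Get volume type LUKS" reports ok, and the paired "Create volume type LUKS" reports changed only when the type was absent, which is what keeps the play idempotent across reruns. A minimal sketch of that pattern against a hypothetical client object (not the actual module the play uses):

```python
class FakeVolumeTypeAPI:
    """Stand-in for a volume-type API; just enough state for the sketch."""
    def __init__(self):
        self.types = {}
    def get_volume_type(self, name):
        return self.types.get(name)
    def create_volume_type(self, name, **extra_specs):
        self.types[name] = {"name": name, **extra_specs}
        return self.types[name]

def ensure_volume_type(client, name, **extra_specs):
    """Create the volume type only if missing; return (type, changed)."""
    existing = client.get_volume_type(name)
    if existing is not None:
        return existing, False                      # "ok": already present
    return client.create_volume_type(name, **extra_specs), True  # "changed"

client = FakeVolumeTypeAPI()
vt, changed_first = ensure_volume_type(client, "LUKS", encrypted=True)
_, changed_second = ensure_volume_type(client, "LUKS", encrypted=True)
```

On a second run of the play every Create task would report ok instead of changed, mirroring `changed_second` here.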
2025-09-19 01:13:29 | INFO  | Processing image 'Cirros 0.6.2'
2025-09-19 01:13:30 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2025-09-19 01:13:30 | INFO  | Importing image Cirros 0.6.2
2025-09-19 01:13:30 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-09-19 01:13:31 | INFO  | Waiting for image to leave queued state...
2025-09-19 01:13:33 | INFO  | Waiting for import to complete...
2025-09-19 01:13:43 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2025-09-19 01:13:44 | INFO  | Checking parameters of 'Cirros 0.6.2'
2025-09-19 01:13:44 | INFO  | Setting internal_version = 0.6.2
2025-09-19 01:13:44 | INFO  | Setting image_original_user = cirros
2025-09-19 01:13:44 | INFO  | Adding tag os:cirros
2025-09-19 01:13:44 | INFO  | Setting property architecture: x86_64
2025-09-19 01:13:44 | INFO  | Setting property hw_disk_bus: scsi
2025-09-19 01:13:44 | INFO  | Setting property hw_rng_model: virtio
2025-09-19 01:13:45 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-09-19 01:13:45 | INFO  | Setting property hw_watchdog_action: reset
2025-09-19 01:13:45 | INFO  | Setting property hypervisor_type: qemu
2025-09-19 01:13:45 | INFO  | Setting property os_distro: cirros
2025-09-19 01:13:45 | INFO  | Setting property replace_frequency: never
2025-09-19 01:13:46 | INFO  | Setting property uuid_validity: none
2025-09-19 01:13:46 | INFO  | Setting property provided_until: none
2025-09-19 01:13:46 | INFO  | Setting property image_description: Cirros
2025-09-19 01:13:46 | INFO  | Setting property image_name: Cirros
2025-09-19 01:13:46 | INFO  | Setting property internal_version: 0.6.2
2025-09-19 01:13:47 | INFO  | Setting property image_original_user: cirros
2025-09-19 01:13:47 | INFO  | Setting property os_version: 0.6.2
2025-09-19 01:13:47 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-09-19 01:13:47 | INFO  | Setting property image_build_date: 2023-05-30
2025-09-19 01:13:48 | INFO  | Checking status of 'Cirros 0.6.2'
2025-09-19 01:13:48 | INFO  | Checking visibility of 'Cirros 0.6.2'
2025-09-19 01:13:48 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2025-09-19 01:13:48 | INFO  | Processing image 'Cirros 0.6.3'
2025-09-19 01:13:48 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2025-09-19 01:13:48 | INFO  | Importing image Cirros 0.6.3
2025-09-19 01:13:48 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-09-19 01:13:48 | INFO  | Waiting for image to leave queued state...
2025-09-19 01:13:50 | INFO  | Waiting for import to complete...
2025-09-19 01:14:01 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2025-09-19 01:14:01 | INFO  | Checking parameters of 'Cirros 0.6.3'
2025-09-19 01:14:01 | INFO  | Setting internal_version = 0.6.3
2025-09-19 01:14:01 | INFO  | Setting image_original_user = cirros
2025-09-19 01:14:01 | INFO  | Adding tag os:cirros
2025-09-19 01:14:01 | INFO  | Setting property architecture: x86_64
2025-09-19 01:14:01 | INFO  | Setting property hw_disk_bus: scsi
2025-09-19 01:14:02 | INFO  | Setting property hw_rng_model: virtio
2025-09-19 01:14:02 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-09-19 01:14:02 | INFO  | Setting property hw_watchdog_action: reset
2025-09-19 01:14:02 | INFO  | Setting property hypervisor_type: qemu
2025-09-19 01:14:02 | INFO  | Setting property os_distro: cirros
2025-09-19 01:14:03 | INFO  | Setting property replace_frequency: never
2025-09-19 01:14:03 | INFO  | Setting property uuid_validity: none
2025-09-19 01:14:03 | INFO  | Setting property provided_until: none
2025-09-19 01:14:03 | INFO  | Setting property image_description: Cirros
2025-09-19 01:14:04 | INFO  | Setting property image_name: Cirros
2025-09-19 01:14:04 | INFO  | Setting property internal_version: 0.6.3
2025-09-19 01:14:04 | INFO  | Setting property image_original_user: cirros
2025-09-19 01:14:04 | INFO  | Setting property os_version: 0.6.3
2025-09-19 01:14:05 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-09-19 01:14:05 | INFO  | Setting property image_build_date: 2024-09-26
2025-09-19 01:14:05 | INFO  | Checking status of 'Cirros 0.6.3'
2025-09-19 01:14:05 | INFO  | Checking visibility of 'Cirros 0.6.3'
2025-09-19 01:14:05 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
+ sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2025-09-19 01:14:09 | INFO  | date: 2025-09-18
2025-09-19 01:14:09 | INFO  | image: octavia-amphora-haproxy-2024.2.20250918.qcow2
2025-09-19 01:14:09 | INFO  | url:
https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250918.qcow2 2025-09-19 01:14:09.128710 | orchestrator | 2025-09-19 01:14:09 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250918.qcow2.CHECKSUM 2025-09-19 01:14:09.149307 | orchestrator | 2025-09-19 01:14:09 | INFO  | checksum: 2147040f9330099b18fc4c28cd7ab3ce14a45ccadc446bee266eb07ba356b387 2025-09-19 01:14:09.236719 | orchestrator | 2025-09-19 01:14:09 | INFO  | It takes a moment until task 69840bb6-9996-4031-a3c0-6c325e5a8aaa (image-manager) has been started and output is visible here. 2025-09-19 01:15:08.692298 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
2025-09-19 01:15:08.692399 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound
2025-09-19 01:15:08.692412 | orchestrator | 2025-09-19 01:14:11 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-09-18'
2025-09-19 01:15:08.692426 | orchestrator | 2025-09-19 01:14:11 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250918.qcow2: 200
2025-09-19 01:15:08.692524 | orchestrator | 2025-09-19 01:14:11 | INFO  | Importing image OpenStack Octavia Amphora 2025-09-18
2025-09-19 01:15:08.692532 | orchestrator | 2025-09-19 01:14:11 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250918.qcow2
2025-09-19 01:15:08.692541 | orchestrator | 2025-09-19 01:14:11 | INFO  | Waiting for image to leave queued state...
2025-09-19 01:15:08.692549 | orchestrator | 2025-09-19 01:14:13 | INFO  | Waiting for import to complete...
2025-09-19 01:15:08.692557 | orchestrator | 2025-09-19 01:14:23 | INFO  | Waiting for import to complete...
2025-09-19 01:15:08.692564 | orchestrator | 2025-09-19 01:14:34 | INFO  | Waiting for import to complete...
2025-09-19 01:15:08.692571 | orchestrator | 2025-09-19 01:14:44 | INFO  | Waiting for import to complete...
2025-09-19 01:15:08.692579 | orchestrator | 2025-09-19 01:14:54 | INFO  | Waiting for import to complete...
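The bootstrap step above logs an expected SHA-256 (`checksum:`) next to the amphora image URL, and the import loop then polls until the image leaves the queued state. As a minimal sketch of the verification idea only (assumed, not the actual implementation of the `301-openstack-octavia-*` script), a `.CHECKSUM`-style value can be compared against a downloaded file like this; `verify_checksum` and the demo file are illustrative:

```shell
#!/bin/sh
# Hedged sketch: compare a file's SHA-256 digest against an expected value,
# as the checksum/checksum_url log lines above imply. Names are assumptions.
set -e

verify_checksum() {
    # $1 = path to file, $2 = expected sha256 digest (hex)
    actual=$(sha256sum "$1" | awk '{ print $1 }')
    [ "$actual" = "$2" ]
}

# Demonstration with a locally created file instead of the real qcow2 image:
tmp=$(mktemp)
printf 'hello' > "$tmp"
expected=2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
if verify_checksum "$tmp" "$expected"; then
    echo "checksum ok"
else
    echo "checksum mismatch" >&2
fi
rm -f "$tmp"
```

In a real run the expected digest would come from fetching the logged `checksum_url` rather than being hard-coded.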
2025-09-19 01:15:08.692587 | orchestrator | 2025-09-19 01:15:04 | INFO  | Import of 'OpenStack Octavia Amphora 2025-09-18' successfully completed, reloading images
2025-09-19 01:15:08.692595 | orchestrator | 2025-09-19 01:15:04 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-09-18'
2025-09-19 01:15:08.692602 | orchestrator | 2025-09-19 01:15:04 | INFO  | Setting internal_version = 2025-09-18
2025-09-19 01:15:08.692610 | orchestrator | 2025-09-19 01:15:04 | INFO  | Setting image_original_user = ubuntu
2025-09-19 01:15:08.692618 | orchestrator | 2025-09-19 01:15:04 | INFO  | Adding tag amphora
2025-09-19 01:15:08.692626 | orchestrator | 2025-09-19 01:15:05 | INFO  | Adding tag os:ubuntu
2025-09-19 01:15:08.692633 | orchestrator | 2025-09-19 01:15:05 | INFO  | Setting property architecture: x86_64
2025-09-19 01:15:08.692660 | orchestrator | 2025-09-19 01:15:05 | INFO  | Setting property hw_disk_bus: scsi
2025-09-19 01:15:08.692676 | orchestrator | 2025-09-19 01:15:05 | INFO  | Setting property hw_rng_model: virtio
2025-09-19 01:15:08.692684 | orchestrator | 2025-09-19 01:15:05 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-09-19 01:15:08.692691 | orchestrator | 2025-09-19 01:15:05 | INFO  | Setting property hw_watchdog_action: reset
2025-09-19 01:15:08.692698 | orchestrator | 2025-09-19 01:15:06 | INFO  | Setting property hypervisor_type: qemu
2025-09-19 01:15:08.692705 | orchestrator | 2025-09-19 01:15:06 | INFO  | Setting property os_distro: ubuntu
2025-09-19 01:15:08.692713 | orchestrator | 2025-09-19 01:15:06 | INFO  | Setting property replace_frequency: quarterly
2025-09-19 01:15:08.692720 | orchestrator | 2025-09-19 01:15:06 | INFO  | Setting property uuid_validity: last-1
2025-09-19 01:15:08.692727 | orchestrator | 2025-09-19 01:15:06 | INFO  | Setting property provided_until: none
2025-09-19 01:15:08.692734 | orchestrator | 2025-09-19 01:15:06 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2025-09-19 01:15:08.692742 | orchestrator | 2025-09-19 01:15:07 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2025-09-19 01:15:08.692749 | orchestrator | 2025-09-19 01:15:07 | INFO  | Setting property internal_version: 2025-09-18
2025-09-19 01:15:08.692756 | orchestrator | 2025-09-19 01:15:07 | INFO  | Setting property image_original_user: ubuntu
2025-09-19 01:15:08.692763 | orchestrator | 2025-09-19 01:15:07 | INFO  | Setting property os_version: 2025-09-18
2025-09-19 01:15:08.692771 | orchestrator | 2025-09-19 01:15:07 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250918.qcow2
2025-09-19 01:15:08.692792 | orchestrator | 2025-09-19 01:15:08 | INFO  | Setting property image_build_date: 2025-09-18
2025-09-19 01:15:08.692800 | orchestrator | 2025-09-19 01:15:08 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-09-18'
2025-09-19 01:15:08.692807 | orchestrator | 2025-09-19 01:15:08 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-09-18'
2025-09-19 01:15:08.692814 | orchestrator | 2025-09-19 01:15:08 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2025-09-19 01:15:08.692822 | orchestrator | 2025-09-19 01:15:08 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2025-09-19 01:15:08.692830 | orchestrator | 2025-09-19 01:15:08 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2025-09-19 01:15:08.692837 | orchestrator | 2025-09-19 01:15:08 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2025-09-19 01:15:09.336698 | orchestrator | ok: Runtime: 0:03:11.645609
2025-09-19 01:15:09.403382 |
2025-09-19 01:15:09.403544 | TASK [Run checks]
2025-09-19 01:15:10.060794 | orchestrator | + set -e
2025-09-19 01:15:10.060949 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-19 01:15:10.060970 | orchestrator | ++ export INTERACTIVE=false
2025-09-19 01:15:10.060988 | orchestrator | ++ INTERACTIVE=false
2025-09-19 01:15:10.061000 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-19 01:15:10.061011 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-19 01:15:10.061034 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-09-19 01:15:10.062256 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-09-19 01:15:10.066407 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-09-19 01:15:10.066467 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-09-19 01:15:10.066480 | orchestrator | + echo
2025-09-19 01:15:10.066492 | orchestrator |
2025-09-19 01:15:10.066502 | orchestrator | # CHECK
2025-09-19 01:15:10.066512 | orchestrator |
2025-09-19 01:15:10.066530 | orchestrator | + echo '# CHECK'
2025-09-19 01:15:10.066540 | orchestrator | + echo
2025-09-19 01:15:10.066559 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-09-19 01:15:10.067186 | orchestrator | ++ semver 9.2.0 5.0.0
2025-09-19 01:15:10.116717 | orchestrator |
2025-09-19 01:15:10.116788 | orchestrator | ## Containers @ testbed-manager
2025-09-19 01:15:10.116799 | orchestrator |
2025-09-19 01:15:10.116812 | orchestrator | + [[ 1 -eq -1 ]]
2025-09-19 01:15:10.116821 | orchestrator | + echo
2025-09-19 01:15:10.116832 | orchestrator | + echo '## Containers @ testbed-manager'
2025-09-19 01:15:10.116842 | orchestrator | + echo
2025-09-19 01:15:10.116852 | orchestrator | + osism container testbed-manager ps
2025-09-19 01:15:12.247839 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-09-19 01:15:12.247998 | orchestrator | f39d677d18a5 registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_blackbox_exporter
2025-09-19 01:15:12.248025 | orchestrator | e4c14e6ac6fc registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_alertmanager
2025-09-19 01:15:12.248037 | orchestrator | 052e6e07feb0 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2025-09-19 01:15:12.248056 | orchestrator | caa1a37c73c5 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter
2025-09-19 01:15:12.248067 | orchestrator | ef1622ce3284 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_server
2025-09-19 01:15:12.248079 | orchestrator | d71c4ab42710 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 19 minutes ago Up 18 minutes cephclient
2025-09-19 01:15:12.248094 | orchestrator | 3a6405730abc registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron
2025-09-19 01:15:12.248105 | orchestrator | 999f7ef9cc0d registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox
2025-09-19 01:15:12.248140 | orchestrator | c1500ce80835 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd
2025-09-19 01:15:12.248152 | orchestrator | 1a2013bbc768 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 32 minutes ago Up 32 minutes (healthy) 80/tcp phpmyadmin
2025-09-19 01:15:12.248163 | orchestrator | e55188cd7aa5 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 33 minutes ago Up 32 minutes openstackclient
2025-09-19 01:15:12.248174 | orchestrator | 756d64673e93 registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 33 minutes ago Up 32 minutes (healthy) 8080/tcp homer
2025-09-19 01:15:12.248185 | orchestrator | 8b1389395b59 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 55 minutes ago Up 54 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2025-09-19 01:15:12.248201 | orchestrator | 7902ae7af4b7 registry.osism.tech/osism/inventory-reconciler:0.20250711.0 "/sbin/tini -- /entr…" 59 minutes ago Up 39 minutes (healthy) manager-inventory_reconciler-1
2025-09-19 01:15:12.248460 | orchestrator | 504074af060b registry.osism.tech/osism/osism-kubernetes:0.20250711.0 "/entrypoint.sh osis…" 59 minutes ago Up 39 minutes (healthy) osism-kubernetes
2025-09-19 01:15:12.248481 | orchestrator | 202b5429b8a2 registry.osism.tech/osism/osism-ansible:0.20250711.0 "/entrypoint.sh osis…" 59 minutes ago Up 39 minutes (healthy) osism-ansible
2025-09-19 01:15:12.248492 | orchestrator | 8b39dbbf6416 registry.osism.tech/osism/ceph-ansible:0.20250711.0 "/entrypoint.sh osis…" 59 minutes ago Up 39 minutes (healthy) ceph-ansible
2025-09-19 01:15:12.248503 | orchestrator | ca4fd2308541 registry.osism.tech/osism/kolla-ansible:0.20250711.0 "/entrypoint.sh osis…" 59 minutes ago Up 39 minutes (healthy) kolla-ansible
2025-09-19 01:15:12.248514 | orchestrator | 616402bef461 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 59 minutes ago Up 40 minutes (healthy) 8000/tcp manager-ara-server-1
2025-09-19 01:15:12.248526 | orchestrator | daefeb0f8b77 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 59 minutes ago Up 40 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2025-09-19 01:15:12.248537 | orchestrator | cd11d5575f58 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" 59 minutes ago Up 40 minutes (healthy) 3306/tcp manager-mariadb-1
2025-09-19 01:15:12.248548 | orchestrator | c5e797f2948b registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 59 minutes ago Up 40 minutes (healthy) manager-flower-1
2025-09-19 01:15:12.248582 | orchestrator | e263d2ad276c registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- sleep…" 59 minutes ago Up 40 minutes (healthy) osismclient
2025-09-19 01:15:12.248603 | orchestrator | 53d43e7ed8a5 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 59 minutes ago Up 40 minutes (healthy) manager-beat-1
2025-09-19 01:15:12.248833 | orchestrator | 699ee76b0ed6 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 59 minutes ago Up 40 minutes (healthy) manager-openstack-1
2025-09-19 01:15:12.248850 | orchestrator | ecadf8255367 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 59 minutes ago Up 40 minutes 192.168.16.5:3000->3000/tcp osism-frontend
2025-09-19 01:15:12.248862 | orchestrator | 54dce972ee60 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" 59 minutes ago Up 40 minutes (healthy) 6379/tcp manager-redis-1
2025-09-19 01:15:12.248873 | orchestrator | 70b594c07201 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 59 minutes ago Up 40 minutes (healthy) manager-listener-1
2025-09-19 01:15:12.248884 | orchestrator | aa8fb520fc87 registry.osism.tech/dockerhub/library/traefik:v3.4.3 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2025-09-19 01:15:12.452706 | orchestrator |
2025-09-19 01:15:12.452788 | orchestrator | ## Images @ testbed-manager
2025-09-19 01:15:12.452802 | orchestrator |
2025-09-19 01:15:12.452814 | orchestrator | + echo
2025-09-19 01:15:12.452826 | orchestrator | + echo '## Images @ testbed-manager'
2025-09-19 01:15:12.452837 | orchestrator | + echo
2025-09-19 01:15:12.452848 | orchestrator | + osism container testbed-manager images
2025-09-19 01:15:14.400645 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-09-19 01:15:14.400742 | orchestrator | registry.osism.tech/osism/osism-frontend latest 7bc80eb2be93 About an hour ago 236MB
2025-09-19 01:15:14.400757 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 c8e455cdb955 5 hours ago 243MB
2025-09-19 01:15:14.400788 | orchestrator | registry.osism.tech/osism/homer v25.05.2 d3334946e20e 5 weeks ago 11.5MB
2025-09-19 01:15:14.400798 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20250711.0 fcbac8373342 2 months ago 571MB
2025-09-19 01:15:14.400808 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 2 months ago 628MB
2025-09-19 01:15:14.400818 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 2 months ago 746MB
2025-09-19 01:15:14.400827 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 2 months ago 318MB
2025-09-19 01:15:14.400836 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20250711 cb02c47a5187 2 months ago 891MB
2025-09-19 01:15:14.400846 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20250711 0ac8facfe451 2 months ago 360MB
2025-09-19 01:15:14.400855 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20250711 6c4eef6335f5 2 months ago 456MB
2025-09-19 01:15:14.400884 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 2 months ago 410MB
2025-09-19 01:15:14.400895 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 2 months ago 358MB
2025-09-19 01:15:14.400904 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20250711.0 7b0f9e78b4e4 2 months ago 575MB
2025-09-19 01:15:14.400914 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20250711.0 f677f8f8094b 2 months ago 535MB
2025-09-19 01:15:14.400924 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20250711.0 8fcfa643b744 2 months ago 308MB
2025-09-19 01:15:14.400933 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20250711.0 267f92fc46f6 2 months ago 1.21GB
2025-09-19 01:15:14.400943 | orchestrator | registry.osism.tech/osism/osism 0.20250709.0 ccd699d89870 2 months ago 310MB
2025-09-19 01:15:14.400952 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.5-alpine f218e591b571 2 months ago 41.4MB
2025-09-19 01:15:14.400961 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.3 4113453efcb3 2 months ago 226MB
2025-09-19 01:15:14.400971 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.2 dae0c92b7b63 3 months ago 329MB
2025-09-19 01:15:14.400980 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 4 months ago 453MB
2025-09-19 01:15:14.400990 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 7 months ago 571MB
2025-09-19 01:15:14.400999 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 12 months ago 300MB
2025-09-19 01:15:14.401013 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 15 months ago 146MB
2025-09-19 01:15:14.693962 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-09-19 01:15:14.695233 | orchestrator | ++ semver 9.2.0 5.0.0
2025-09-19 01:15:14.758074 | orchestrator |
2025-09-19 01:15:14.758175 | orchestrator | ## Containers @ testbed-node-0
2025-09-19 01:15:14.758190 | orchestrator |
2025-09-19 01:15:14.758201 | orchestrator | + [[ 1 -eq -1 ]]
2025-09-19 01:15:14.758213 | orchestrator | + echo
2025-09-19 01:15:14.758225 | orchestrator | + echo '## Containers @ testbed-node-0'
2025-09-19 01:15:14.758237 | orchestrator | + echo
2025-09-19 01:15:14.758247 | orchestrator | + osism container testbed-node-0 ps
2025-09-19 01:15:17.067132 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-09-19 01:15:17.067247 | orchestrator | a71cd8b540b7 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) nova_novncproxy
2025-09-19 01:15:17.067263 | orchestrator | 7a53adfb4ba6 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2025-09-19 01:15:17.067276 | orchestrator | dfd986aa5fac registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api
2025-09-19 01:15:17.067287 | orchestrator | 1ead1c9c5eb4 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_scheduler
2025-09-19 01:15:17.067298 | orchestrator | 3fa5cfb9c2e4 registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana
2025-09-19 01:15:17.067312 | orchestrator | d65e1fbb8592 registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api
2025-09-19 01:15:17.067351 | orchestrator | 925e8ad5b1ac registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler
2025-09-19 01:15:17.067363 | orchestrator | 575428d2ef47 registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api
2025-09-19 01:15:17.067375 | orchestrator | 44383bec43fb registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter
2025-09-19 01:15:17.067407 | orchestrator | 10417859fda6 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2025-09-19 01:15:17.067418 | orchestrator | ef9805fe6be9 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter
2025-09-19 01:15:17.067429 | orchestrator | 76a0c2eb22f0 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter
2025-09-19 01:15:17.067461 | orchestrator | f924f2931aaf registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter
2025-09-19 01:15:17.067473 | orchestrator | bce819a51b6c registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor
2025-09-19 01:15:17.067485 | orchestrator | 5da58c903c24 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api
2025-09-19 01:15:17.067495 | orchestrator | 978c4d825b00 registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) neutron_server
2025-09-19 01:15:17.067506 | orchestrator | df3915d8ba0e registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) placement_api
2025-09-19 01:15:17.067516 | orchestrator | 1f670824450d registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_worker
2025-09-19 01:15:17.067527 | orchestrator | 4fe4f135b96e registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_mdns
2025-09-19 01:15:17.067557 | orchestrator | 5fcd8135bc02 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_producer
2025-09-19 01:15:17.067569 | orchestrator | eb8ddcfd3a2e registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_central
2025-09-19 01:15:17.067579 | orchestrator | 918da9803e54 registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_api
2025-09-19 01:15:17.067590 | orchestrator | 6eb6fffa4e8f registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9
2025-09-19 01:15:17.067600 | orchestrator | 33bbcc48eb32 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker
2025-09-19 01:15:17.067628 | orchestrator | d680085f6e62 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_keystone_listener
2025-09-19 01:15:17.067640 | orchestrator | 440ed1448e33 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_api
2025-09-19 01:15:17.067651 | orchestrator | 8ec1aa1003ef registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-0
2025-09-19 01:15:17.067662 | orchestrator | bcf53df73bbb registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone
2025-09-19 01:15:17.067677 | orchestrator | 1cdccaf247b1 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet
2025-09-19 01:15:17.067689 | orchestrator | c2c1b1e9b9c7 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh
2025-09-19 01:15:17.067699 | orchestrator | a013c27b9809 registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon
2025-09-19 01:15:17.067710 | orchestrator | 6d88181b564f registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 22 minutes ago Up 21 minutes (healthy) mariadb
2025-09-19 01:15:17.067721 | orchestrator | d1aea32ec4e8 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch_dashboards
2025-09-19 01:15:17.067731 | orchestrator | bb419f287157 registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch
2025-09-19 01:15:17.067742 | orchestrator | 510e28c4484b registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-0
2025-09-19 01:15:17.067753 | orchestrator | b25a2617379f registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 25 minutes ago Up 25 minutes keepalived
2025-09-19 01:15:17.067768 | orchestrator | bf06b032db82 registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql
2025-09-19 01:15:17.067779 | orchestrator | 1c2296e8d946 registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy
2025-09-19 01:15:17.067790 | orchestrator | 05acc1d8c4be registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_northd
2025-09-19 01:15:17.067800 | orchestrator | 1c5dd7d10aae registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_sb_db
2025-09-19 01:15:17.067818 | orchestrator | 9646976507f9 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_nb_db
2025-09-19 01:15:17.067829 | orchestrator | 016ca5f07995 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller
2025-09-19 01:15:17.067846 | orchestrator | 1701691ebc8f registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-0
2025-09-19 01:15:17.067857 | orchestrator | d3748a6cfa34 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) rabbitmq
2025-09-19 01:15:17.067868 | orchestrator | 9a95b4cae216 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd
2025-09-19 01:15:17.067879 | orchestrator | eda5145c7047 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db
2025-09-19 01:15:17.067889 | orchestrator | 7eb5664d4ad5 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel
2025-09-19 01:15:17.067900 | orchestrator | a3cf6e87fcf0 registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis
2025-09-19 01:15:17.067911 | orchestrator | a0dde855b59b registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) memcached
2025-09-19 01:15:17.067922 | orchestrator | 82236a2a813c registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron
2025-09-19 01:15:17.067932 | orchestrator | 185e2f0b9ca5 registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 32 minutes ago Up 32 minutes kolla_toolbox
2025-09-19 01:15:17.067943 | orchestrator | 6ebf47a506dd registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd
2025-09-19 01:15:17.418627 | orchestrator |
2025-09-19 01:15:17.418735 | orchestrator | ## Images @ testbed-node-0
2025-09-19 01:15:17.418750 | orchestrator |
2025-09-19 01:15:17.418761 | orchestrator | + echo
2025-09-19 01:15:17.418773 | orchestrator | + echo '## Images @ testbed-node-0'
2025-09-19 01:15:17.418785 | orchestrator | + echo
2025-09-19 01:15:17.418796 | orchestrator | + osism container testbed-node-0 images
2025-09-19 01:15:19.580214 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-09-19 01:15:19.580296 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 2 months ago 628MB
2025-09-19 01:15:19.580310 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 2 months ago 329MB
2025-09-19 01:15:19.580322 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 2 months ago 326MB
2025-09-19 01:15:19.580339 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 2 months ago 1.59GB
2025-09-19 01:15:19.580359 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 2 months ago 1.55GB
2025-09-19 01:15:19.580379 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 2 months ago 417MB
2025-09-19 01:15:19.580397 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 2 months ago 318MB
2025-09-19 01:15:19.580416 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 2 months ago 746MB
2025-09-19 01:15:19.580428 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 2 months ago 375MB
2025-09-19 01:15:19.580524 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 2 months ago 1.01GB
2025-09-19 01:15:19.580539 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 2 months ago 318MB
2025-09-19 01:15:19.580550 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 2 months ago 361MB
2025-09-19 01:15:19.580561 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 2 months ago 361MB
2025-09-19 01:15:19.580571 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 2 months ago 1.21GB
2025-09-19 01:15:19.580582 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 2 months ago 353MB
2025-09-19 01:15:19.580593 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 2 months ago 410MB
2025-09-19 01:15:19.580604 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 2 months ago 344MB
2025-09-19 01:15:19.580629 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 2 months ago 358MB
2025-09-19 01:15:19.580640 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 2 months ago 351MB
2025-09-19 01:15:19.580651 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 2 months ago 324MB
2025-09-19 01:15:19.580662 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 2 months ago 324MB
2025-09-19 01:15:19.580672 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 2 months ago 590MB
2025-09-19 01:15:19.580683 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 2 months ago 947MB
2025-09-19 01:15:19.580693 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 2 months ago 946MB
2025-09-19 01:15:19.580704 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 2 months ago 947MB
2025-09-19
01:15:19.580714 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 2 months ago 946MB 2025-09-19 01:15:19.580725 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.0.20250711 05a4552273f6 2 months ago 1.04GB 2025-09-19 01:15:19.580735 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.0.20250711 41f8c34132c7 2 months ago 1.04GB 2025-09-19 01:15:19.580746 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250711 06deffb77b4f 2 months ago 1.1GB 2025-09-19 01:15:19.580757 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250711 02867223fb33 2 months ago 1.1GB 2025-09-19 01:15:19.580767 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250711 6146c08f2b76 2 months ago 1.12GB 2025-09-19 01:15:19.580795 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250711 6d529ee19c1c 2 months ago 1.1GB 2025-09-19 01:15:19.580809 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250711 b1ed239b634f 2 months ago 1.12GB 2025-09-19 01:15:19.580821 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 2 months ago 1.15GB 2025-09-19 01:15:19.580833 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 2 months ago 1.04GB 2025-09-19 01:15:19.580852 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 2 months ago 1.06GB 2025-09-19 01:15:19.580865 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 2 months ago 1.06GB 2025-09-19 01:15:19.580878 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 2 months ago 1.06GB 2025-09-19 01:15:19.580890 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 2 
months ago 1.41GB 2025-09-19 01:15:19.580903 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 2 months ago 1.41GB 2025-09-19 01:15:19.580916 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 2 months ago 1.29GB 2025-09-19 01:15:19.580928 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 2 months ago 1.42GB 2025-09-19 01:15:19.580945 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 2 months ago 1.29GB 2025-09-19 01:15:19.580958 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 2 months ago 1.29GB 2025-09-19 01:15:19.580971 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 2 months ago 1.2GB 2025-09-19 01:15:19.580983 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 2 months ago 1.31GB 2025-09-19 01:15:19.580995 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 2 months ago 1.05GB 2025-09-19 01:15:19.581008 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 2 months ago 1.05GB 2025-09-19 01:15:19.581020 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 2 months ago 1.05GB 2025-09-19 01:15:19.581032 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 2 months ago 1.06GB 2025-09-19 01:15:19.581045 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 2 months ago 1.06GB 2025-09-19 01:15:19.581057 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 2 months ago 1.05GB 2025-09-19 01:15:19.581071 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20250711 f2e37439c6b7 2 months 
ago 1.11GB 2025-09-19 01:15:19.581083 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20250711 b3d19c53d4de 2 months ago 1.11GB 2025-09-19 01:15:19.581095 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 2 months ago 1.11GB 2025-09-19 01:15:19.581108 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 2 months ago 1.13GB 2025-09-19 01:15:19.581120 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 2 months ago 1.11GB 2025-09-19 01:15:19.581132 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 2 months ago 1.24GB 2025-09-19 01:15:19.581145 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20250711 c26d685bbc69 2 months ago 1.04GB 2025-09-19 01:15:19.581157 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20250711 55a7448b63ad 2 months ago 1.04GB 2025-09-19 01:15:19.581170 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20250711 b8a4d60cb725 2 months ago 1.04GB 2025-09-19 01:15:19.581187 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20250711 c0822bfcb81c 2 months ago 1.04GB 2025-09-19 01:15:19.581198 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 4 months ago 1.27GB 2025-09-19 01:15:19.776283 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-09-19 01:15:19.776421 | orchestrator | ++ semver 9.2.0 5.0.0 2025-09-19 01:15:19.826602 | orchestrator | 2025-09-19 01:15:19.826686 | orchestrator | ## Containers @ testbed-node-1 2025-09-19 01:15:19.826701 | orchestrator | 2025-09-19 01:15:19.826713 | orchestrator | + [[ 1 -eq -1 ]] 2025-09-19 01:15:19.826724 | orchestrator | + echo 2025-09-19 01:15:19.826736 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-09-19 01:15:19.826748 | orchestrator | + echo 
2025-09-19 01:15:19.826759 | orchestrator | + osism container testbed-node-1 ps
2025-09-19 01:15:21.881007 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-09-19 01:15:21.881058 | orchestrator | 73ec023a35a5 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) nova_novncproxy
2025-09-19 01:15:21.881065 | orchestrator | 2e64df905396 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2025-09-19 01:15:21.881079 | orchestrator | a5f77d33ee1a registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2025-09-19 01:15:21.881084 | orchestrator | 499c3e03d4e7 registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api
2025-09-19 01:15:21.881089 | orchestrator | 0f8cfda740c0 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_scheduler
2025-09-19 01:15:21.881094 | orchestrator | dcdc4c8a1f0c registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api
2025-09-19 01:15:21.881098 | orchestrator | 887de226992d registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler
2025-09-19 01:15:21.881103 | orchestrator | 9025df7c860f registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api
2025-09-19 01:15:21.881107 | orchestrator | 069c6aff5f65 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 13 minutes ago Up 12 minutes prometheus_elasticsearch_exporter
2025-09-19 01:15:21.881113 | orchestrator | c1f681ae4210 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2025-09-19 01:15:21.881118 | orchestrator | 525c2502a7fc registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter
2025-09-19 01:15:21.881122 | orchestrator | cf987f2c6a64 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter
2025-09-19 01:15:21.881127 | orchestrator | 3a41d7913a4f registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter
2025-09-19 01:15:21.881132 | orchestrator | 361664500b24 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor
2025-09-19 01:15:21.881183 | orchestrator | 93f55f5dc2ae registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api
2025-09-19 01:15:21.881188 | orchestrator | 3d480272f246 registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) neutron_server
2025-09-19 01:15:21.881193 | orchestrator | c768f1a8f545 registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) placement_api
2025-09-19 01:15:21.881197 | orchestrator | b744126105e4 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_worker
2025-09-19 01:15:21.881202 | orchestrator | 7043245f06c8 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_mdns
2025-09-19 01:15:21.881213 | orchestrator | bceb21e4d6b7 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_producer
2025-09-19 01:15:21.881218 | orchestrator | 6ee1a5fc6dd8 registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_central
2025-09-19 01:15:21.881225 | orchestrator | 40f5e6b10f41 registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_api
2025-09-19 01:15:21.881230 | orchestrator | 591a42c44cf3 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9
2025-09-19 01:15:21.881237 | orchestrator | f84bf6c9fdd0 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker
2025-09-19 01:15:21.881242 | orchestrator | 54e3d9090ade registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_keystone_listener
2025-09-19 01:15:21.881247 | orchestrator | 62d01c87696e registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_api
2025-09-19 01:15:21.881251 | orchestrator | 80b3a0126b71 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-1
2025-09-19 01:15:21.881256 | orchestrator | 9c5bce982425 registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone
2025-09-19 01:15:21.881261 | orchestrator | d1784b73139d registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon
2025-09-19 01:15:21.881265 | orchestrator | 5ba790ad4931 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet
2025-09-19 01:15:21.881270 | orchestrator | feb0420d9998 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh
2025-09-19 01:15:21.881274 | orchestrator | 4ebf35dc949f registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards
2025-09-19 01:15:21.881282 | orchestrator | 42079484284b registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 23 minutes ago Up 23 minutes (healthy) mariadb
2025-09-19 01:15:21.881286 | orchestrator | 5232d43a49b4 registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch
2025-09-19 01:15:21.881291 | orchestrator | fe3b763797a5 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-1
2025-09-19 01:15:21.881295 | orchestrator | 3264bd84b681 registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 25 minutes ago Up 25 minutes keepalived
2025-09-19 01:15:21.881300 | orchestrator | 1d2d9b1e42fe registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql
2025-09-19 01:15:21.881304 | orchestrator | d4d925cc74f1 registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy
2025-09-19 01:15:21.881309 | orchestrator | b969f7ecc520 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_northd
2025-09-19 01:15:21.881313 | orchestrator | ee8b9de23d82 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_sb_db
2025-09-19 01:15:21.881321 | orchestrator | fbf3fd55559f registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_nb_db
2025-09-19 01:15:21.881326 | orchestrator | 06f927d0db67 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller
2025-09-19 01:15:21.881330 | orchestrator | 5987c9929e7a registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq
2025-09-19 01:15:21.881335 | orchestrator | 63a48d2327f0 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-1
2025-09-19 01:15:21.881339 | orchestrator | ead47e0b5063 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd
2025-09-19 01:15:21.881344 | orchestrator | 61fca7f70db0 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db
2025-09-19 01:15:21.881349 | orchestrator | b7859d314c6f registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis_sentinel
2025-09-19 01:15:21.881355 | orchestrator | c33ea601634a registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis
2025-09-19 01:15:21.881360 | orchestrator | 7ce727645449 registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) memcached
2025-09-19 01:15:21.881365 | orchestrator | 6eab397a9062 registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron
2025-09-19 01:15:21.881372 | orchestrator | f58681c5dcca registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox
2025-09-19 01:15:21.881377 | orchestrator | 3a136b67f61a registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd
2025-09-19 01:15:22.074387 | orchestrator |
2025-09-19 01:15:22.074490 | orchestrator | ## Images @ testbed-node-1
2025-09-19 01:15:22.074503 | orchestrator |
2025-09-19 01:15:22.074513 | orchestrator | + echo
2025-09-19 01:15:22.074524 | orchestrator | + echo '## Images @ testbed-node-1'
2025-09-19 01:15:22.074535 | orchestrator | + echo
2025-09-19 01:15:22.074544 | orchestrator | + osism container testbed-node-1 images
2025-09-19 01:15:24.183171 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-09-19 01:15:24.183273 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 2 months ago 628MB
2025-09-19 01:15:24.183289 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 2 months ago 329MB
2025-09-19 01:15:24.183300 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 2 months ago 326MB
2025-09-19 01:15:24.183311 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 2 months ago 1.59GB
2025-09-19 01:15:24.183322 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 2 months ago 1.55GB
2025-09-19 01:15:24.183333 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 2 months ago 417MB
2025-09-19 01:15:24.183344 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 2 months ago 318MB
2025-09-19 01:15:24.183355 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 2 months ago 375MB
2025-09-19 01:15:24.183365 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 2 months ago 746MB
2025-09-19 01:15:24.183376 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 2 months ago 1.01GB
2025-09-19 01:15:24.183387 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 2 months ago 318MB
2025-09-19 01:15:24.183397 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 2 months ago 361MB
2025-09-19 01:15:24.183408 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 2 months ago 361MB
2025-09-19 01:15:24.183418 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 2 months ago 1.21GB
2025-09-19 01:15:24.183429 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 2 months ago 353MB
2025-09-19 01:15:24.183440 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 2 months ago 410MB
2025-09-19 01:15:24.183495 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 2 months ago 344MB
2025-09-19 01:15:24.183509 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 2 months ago 358MB
2025-09-19 01:15:24.183520 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 2 months ago 324MB
2025-09-19 01:15:24.183531 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 2 months ago 351MB
2025-09-19 01:15:24.183542 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 2 months ago 324MB
2025-09-19 01:15:24.183573 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 2 months ago 590MB
2025-09-19 01:15:24.183584 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 2 months ago 947MB
2025-09-19 01:15:24.183595 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 2 months ago 946MB
2025-09-19 01:15:24.183605 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 2 months ago 947MB
2025-09-19 01:15:24.183616 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 2 months ago 946MB
2025-09-19 01:15:24.183627 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 2 months ago 1.15GB
2025-09-19 01:15:24.183638 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 2 months ago 1.04GB
2025-09-19 01:15:24.183648 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 2 months ago 1.06GB
2025-09-19 01:15:24.183659 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 2 months ago 1.06GB
2025-09-19 01:15:24.183670 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 2 months ago 1.06GB
2025-09-19 01:15:24.183698 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 2 months ago 1.41GB
2025-09-19 01:15:24.183727 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 2 months ago 1.41GB
2025-09-19 01:15:24.183739 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 2 months ago 1.29GB
2025-09-19 01:15:24.183750 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 2 months ago 1.42GB
2025-09-19 01:15:24.183761 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 2 months ago 1.29GB
2025-09-19 01:15:24.183776 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 2 months ago 1.29GB
2025-09-19 01:15:24.183787 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 2 months ago 1.2GB
2025-09-19 01:15:24.183797 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 2 months ago 1.31GB
2025-09-19 01:15:24.183808 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 2 months ago 1.05GB
2025-09-19 01:15:24.183819 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 2 months ago 1.05GB
2025-09-19 01:15:24.183829 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 2 months ago 1.05GB
2025-09-19 01:15:24.183839 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 2 months ago 1.06GB
2025-09-19 01:15:24.183850 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 2 months ago 1.06GB
2025-09-19 01:15:24.183860 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 2 months ago 1.05GB
2025-09-19 01:15:24.183871 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 2 months ago 1.11GB
2025-09-19 01:15:24.183881 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 2 months ago 1.13GB
2025-09-19 01:15:24.183900 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 2 months ago 1.11GB
2025-09-19 01:15:24.183910 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 2 months ago 1.24GB
2025-09-19 01:15:24.183921 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 4 months ago 1.27GB
2025-09-19 01:15:24.476726 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-09-19 01:15:24.477829 | orchestrator | ++ semver 9.2.0 5.0.0
2025-09-19 01:15:24.531905 | orchestrator |
2025-09-19 01:15:24.531986 | orchestrator | ## Containers @ testbed-node-2
2025-09-19 01:15:24.532001 | orchestrator |
2025-09-19 01:15:24.532015 | orchestrator | + [[ 1 -eq -1 ]]
2025-09-19 01:15:24.532029 | orchestrator | + echo
2025-09-19 01:15:24.532043 | orchestrator | + echo '## Containers @ testbed-node-2'
2025-09-19 01:15:24.532057 | orchestrator | + echo
2025-09-19 01:15:24.532071 | orchestrator | + osism container testbed-node-2 ps
2025-09-19 01:15:26.865927 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-09-19 01:15:26.866094 | orchestrator | b6e89ed5b19d registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) nova_novncproxy
2025-09-19 01:15:26.866113 | orchestrator | a1c84d26b408 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2025-09-19 01:15:26.866126 | orchestrator | b353836d9485 registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2025-09-19 01:15:26.866137 | orchestrator | c837e16788fb registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api
2025-09-19 01:15:26.866148 | orchestrator | 45b349dee3a8 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_scheduler
2025-09-19 01:15:26.866159 | orchestrator | 802f530590ef registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api
2025-09-19 01:15:26.866170 | orchestrator | 03584d477568 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler
2025-09-19 01:15:26.866180 | orchestrator | d2a797e65b3c registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api
2025-09-19 01:15:26.866191 | orchestrator | c3effcf9dc0f registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter
2025-09-19 01:15:26.866205 | orchestrator | 02f68302e803 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2025-09-19 01:15:26.866216 | orchestrator | 98ac7945b248 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter
2025-09-19 01:15:26.866227 | orchestrator | f6cba5dda4ea registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter
2025-09-19 01:15:26.866238 | orchestrator | dbf70f453c2e registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter
2025-09-19 01:15:26.866270 | orchestrator | 7580adaa2e77 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor
2025-09-19 01:15:26.866281 | orchestrator | 37915dce833c registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api
2025-09-19 01:15:26.866308 | orchestrator | fb128200ae13 registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) neutron_server
2025-09-19 01:15:26.866319 | orchestrator | 876b6769a2c6 registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) placement_api
2025-09-19 01:15:26.866330 | orchestrator | 696dec3cb9bd registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_worker
2025-09-19 01:15:26.866340 | orchestrator | 27c4fe547d7a registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_mdns
2025-09-19 01:15:26.866369 | orchestrator | 948fbcc97837 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_producer
2025-09-19 01:15:26.866380 | orchestrator | d8f9f327caa3 registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_central
2025-09-19 01:15:26.866391 | orchestrator | 50e4add39bc1 registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_api
2025-09-19 01:15:26.866402 | orchestrator | fd6d1edbe075 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_backend_bind9
2025-09-19 01:15:26.866413 | orchestrator | 4ae1ea284587 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker
2025-09-19 01:15:26.866424 | orchestrator | 14417f7e0e8a registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_keystone_listener
2025-09-19 01:15:26.866434 | orchestrator | 549250e77d60 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_api
2025-09-19 01:15:26.866445 | orchestrator | 899ae9c9d8ec registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-2
2025-09-19 01:15:26.866478 | orchestrator | 244059f20f3c registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone
2025-09-19 01:15:26.866492 | orchestrator | 0b9222dc502d registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon
2025-09-19 01:15:26.866504 | orchestrator | 1ef288f39448 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet
2025-09-19 01:15:26.866518 | orchestrator | 333dc46007aa registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh
2025-09-19 01:15:26.866537 | orchestrator | 406e564d1111 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards
2025-09-19 01:15:26.866548 | orchestrator | 7faa9721f4b8 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb
2025-09-19 01:15:26.866559 | orchestrator | c2944bc53368 registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch
2025-09-19 01:15:26.866570 | orchestrator | a3a32ffa92a0 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-2
2025-09-19 01:15:26.866580 | orchestrator | 32e322c7248e registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 25 minutes ago Up 25 minutes keepalived
2025-09-19 01:15:26.866591 | orchestrator | 532818031e65 registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql
2025-09-19 01:15:26.866602 | orchestrator | 524a38752d79 registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy
2025-09-19 01:15:26.866612 | orchestrator | 21271189d8b1 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_northd
2025-09-19 01:15:26.866623 | orchestrator | de7806f43ded registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_sb_db
2025-09-19 01:15:26.866640 | orchestrator | 311f7260ea71 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_nb_db
2025-09-19 01:15:26.866652 | orchestrator | dac727b3aac2 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq
2025-09-19 01:15:26.866663 | orchestrator | eb86e01af5a4 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller
2025-09-19 01:15:26.866674 | orchestrator | 1ced7f02289d registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-2
2025-09-19 01:15:26.866685 | orchestrator | 82164531105a registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd
2025-09-19 01:15:26.866695 | orchestrator | 2cdc0e06e10e registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_db
2025-09-19 01:15:26.866706 | orchestrator | 46f1d60346c2 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 31
minutes ago Up 31 minutes (healthy) redis_sentinel 2025-09-19 01:15:26.866717 | orchestrator | a5f886d36952 registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis 2025-09-19 01:15:26.866728 | orchestrator | 3a6baf5d2324 registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) memcached 2025-09-19 01:15:26.866745 | orchestrator | 6bb7009b4f88 registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2025-09-19 01:15:26.866763 | orchestrator | 11e7033aa6b0 registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-09-19 01:15:26.866779 | orchestrator | f53e2b1b2390 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-09-19 01:15:27.143953 | orchestrator | 2025-09-19 01:15:27.144042 | orchestrator | ## Images @ testbed-node-2 2025-09-19 01:15:27.144056 | orchestrator | 2025-09-19 01:15:27.144068 | orchestrator | + echo 2025-09-19 01:15:27.144080 | orchestrator | + echo '## Images @ testbed-node-2' 2025-09-19 01:15:27.144092 | orchestrator | + echo 2025-09-19 01:15:27.144103 | orchestrator | + osism container testbed-node-2 images 2025-09-19 01:15:29.372967 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-09-19 01:15:29.373052 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 2 months ago 628MB 2025-09-19 01:15:29.373063 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 2 months ago 329MB 2025-09-19 01:15:29.373072 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 2 months ago 326MB 2025-09-19 01:15:29.373081 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 2 months ago 1.59GB 
2025-09-19 01:15:29.373090 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 2 months ago 1.55GB
2025-09-19 01:15:29.373099 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 2 months ago 417MB
2025-09-19 01:15:29.373107 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 2 months ago 318MB
2025-09-19 01:15:29.373116 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 2 months ago 746MB
2025-09-19 01:15:29.373124 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 2 months ago 375MB
2025-09-19 01:15:29.373132 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 2 months ago 1.01GB
2025-09-19 01:15:29.373141 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 2 months ago 318MB
2025-09-19 01:15:29.373149 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 2 months ago 361MB
2025-09-19 01:15:29.373157 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 2 months ago 361MB
2025-09-19 01:15:29.373167 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 2 months ago 1.21GB
2025-09-19 01:15:29.373175 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 2 months ago 353MB
2025-09-19 01:15:29.373184 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 2 months ago 410MB
2025-09-19 01:15:29.373192 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 2 months ago 344MB
2025-09-19 01:15:29.373201 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 2 months ago 358MB
2025-09-19 01:15:29.373209 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 2 months ago 324MB
2025-09-19 01:15:29.373217 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 2 months ago 351MB
2025-09-19 01:15:29.373247 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 2 months ago 324MB
2025-09-19 01:15:29.373256 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 2 months ago 590MB
2025-09-19 01:15:29.373265 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 2 months ago 946MB
2025-09-19 01:15:29.373273 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 2 months ago 947MB
2025-09-19 01:15:29.373281 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 2 months ago 947MB
2025-09-19 01:15:29.373290 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 2 months ago 946MB
2025-09-19 01:15:29.373298 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 2 months ago 1.15GB
2025-09-19 01:15:29.373507 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 2 months ago 1.04GB
2025-09-19 01:15:29.373525 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 2 months ago 1.06GB
2025-09-19 01:15:29.373534 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 2 months ago 1.06GB
2025-09-19 01:15:29.373543 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 2 months ago 1.06GB
2025-09-19 01:15:29.373552 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 2 months ago 1.41GB
2025-09-19 01:15:29.373560 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 2 months ago 1.41GB
2025-09-19 01:15:29.373569 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 2 months ago 1.29GB
2025-09-19 01:15:29.373578 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 2 months ago 1.42GB
2025-09-19 01:15:29.373586 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 2 months ago 1.29GB
2025-09-19 01:15:29.373595 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 2 months ago 1.29GB
2025-09-19 01:15:29.373604 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 2 months ago 1.2GB
2025-09-19 01:15:29.373612 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 2 months ago 1.31GB
2025-09-19 01:15:29.373621 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 2 months ago 1.05GB
2025-09-19 01:15:29.373629 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 2 months ago 1.05GB
2025-09-19 01:15:29.373638 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 2 months ago 1.05GB
2025-09-19 01:15:29.373647 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 2 months ago 1.06GB
2025-09-19 01:15:29.373655 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 2 months ago 1.06GB
2025-09-19 01:15:29.373664 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 2 months ago 1.05GB
2025-09-19 01:15:29.373672 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 2 months ago 1.11GB
2025-09-19 01:15:29.373691 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 2 months ago 1.13GB
2025-09-19 01:15:29.373700 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 2 months ago 1.11GB
2025-09-19 01:15:29.373709 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 2 months ago 1.24GB
2025-09-19 01:15:29.373717 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 4 months ago 1.27GB
2025-09-19 01:15:29.735908 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2025-09-19 01:15:29.743922 | orchestrator | + set -e
2025-09-19 01:15:29.743958 | orchestrator | + source /opt/manager-vars.sh
2025-09-19 01:15:29.745701 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-19 01:15:29.745722 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-19 01:15:29.745733 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-19 01:15:29.745744 | orchestrator | ++ CEPH_VERSION=reef
2025-09-19 01:15:29.745756 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-19 01:15:29.745768 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-19 01:15:29.745779 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-09-19 01:15:29.745790 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-09-19 01:15:29.745822 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-19 01:15:29.745834 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-19 01:15:29.745845 | orchestrator | ++ export ARA=false
2025-09-19 01:15:29.745856 | orchestrator | ++ ARA=false
2025-09-19 01:15:29.745868 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-19 01:15:29.745879 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-19 01:15:29.745890 | orchestrator | ++ export TEMPEST=true
2025-09-19 01:15:29.745900 | orchestrator | ++ TEMPEST=true
2025-09-19 01:15:29.745911 | orchestrator | ++ export IS_ZUUL=true
2025-09-19 01:15:29.745922 | orchestrator | ++ IS_ZUUL=true
2025-09-19 01:15:29.745932 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51
2025-09-19 01:15:29.745944 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51
2025-09-19 01:15:29.745954 | orchestrator | ++ export EXTERNAL_API=false
2025-09-19 01:15:29.745965 | orchestrator | ++ EXTERNAL_API=false
2025-09-19 01:15:29.745976 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-19 01:15:29.745986 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-19 01:15:29.745997 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-19 01:15:29.746008 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-19 01:15:29.746085 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-19 01:15:29.746098 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-19 01:15:29.746109 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-09-19 01:15:29.746120 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2025-09-19 01:15:29.751968 | orchestrator | + set -e
2025-09-19 01:15:29.751992 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-19 01:15:29.752004 | orchestrator | ++ export INTERACTIVE=false
2025-09-19 01:15:29.752016 | orchestrator | ++ INTERACTIVE=false
2025-09-19 01:15:29.752027 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-19 01:15:29.752038 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-19 01:15:29.752050 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-09-19 01:15:29.753129 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-09-19 01:15:29.758976 | orchestrator |
2025-09-19 01:15:29.759031 | orchestrator | # Ceph status
2025-09-19 01:15:29.759044 | orchestrator |
2025-09-19 01:15:29.759055 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-09-19 01:15:29.759072 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-09-19 01:15:29.759083 | orchestrator | + echo
2025-09-19 01:15:29.759095 | orchestrator | + echo '# Ceph status'
2025-09-19 01:15:29.759106 | orchestrator | + echo
2025-09-19 01:15:29.759117 | orchestrator | + ceph -s
2025-09-19 01:15:30.363706 | orchestrator | cluster:
2025-09-19 01:15:30.363805 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2025-09-19 01:15:30.363822 | orchestrator | health: HEALTH_OK
2025-09-19 01:15:30.363834 | orchestrator |
2025-09-19 01:15:30.363845 | orchestrator | services:
2025-09-19 01:15:30.363876 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 29m)
2025-09-19 01:15:30.363895 | orchestrator | mgr: testbed-node-2(active, since 16m), standbys: testbed-node-0, testbed-node-1
2025-09-19 01:15:30.363907 | orchestrator | mds: 1/1 daemons up, 2 standby
2025-09-19 01:15:30.363919 | orchestrator | osd: 6 osds: 6 up (since 25m), 6 in (since 26m)
2025-09-19 01:15:30.363957 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2025-09-19 01:15:30.363969 | orchestrator |
2025-09-19 01:15:30.363980 | orchestrator | data:
2025-09-19 01:15:30.363991 | orchestrator | volumes: 1/1 healthy
2025-09-19 01:15:30.364001 | orchestrator | pools: 14 pools, 401 pgs
2025-09-19 01:15:30.364012 | orchestrator | objects: 556 objects, 2.2 GiB
2025-09-19 01:15:30.364023 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2025-09-19 01:15:30.364034 | orchestrator | pgs: 401 active+clean
2025-09-19 01:15:30.364045 | orchestrator |
2025-09-19 01:15:30.407702 | orchestrator |
2025-09-19 01:15:30.407760 | orchestrator | # Ceph versions
2025-09-19 01:15:30.407772 | orchestrator |
2025-09-19 01:15:30.407784 | orchestrator | + echo
2025-09-19 01:15:30.407794 | orchestrator | + echo '# Ceph versions'
2025-09-19 01:15:30.407805 | orchestrator | + echo
2025-09-19 01:15:30.407816 | orchestrator | + ceph versions
2025-09-19 01:15:30.997807 | orchestrator | {
2025-09-19 01:15:30.997909 | orchestrator | "mon": {
2025-09-19 01:15:30.997923 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-09-19 01:15:30.997935 | orchestrator | },
2025-09-19 01:15:30.997945 | orchestrator | "mgr": {
2025-09-19 01:15:30.997955 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-09-19 01:15:30.997964 | orchestrator | },
2025-09-19 01:15:30.997974 | orchestrator | "osd": {
2025-09-19 01:15:30.997984 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2025-09-19 01:15:30.997993 | orchestrator | },
2025-09-19 01:15:30.998003 | orchestrator | "mds": {
2025-09-19 01:15:30.998012 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-09-19 01:15:30.998069 | orchestrator | },
2025-09-19 01:15:30.998079 | orchestrator | "rgw": {
2025-09-19 01:15:30.998089 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-09-19 01:15:30.998098 | orchestrator | },
2025-09-19 01:15:30.998108 | orchestrator | "overall": {
2025-09-19 01:15:30.998118 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2025-09-19 01:15:30.998128 | orchestrator | }
2025-09-19 01:15:30.998137 | orchestrator | }
2025-09-19 01:15:31.049695 | orchestrator |
2025-09-19 01:15:31.049739 | orchestrator | # Ceph OSD tree
2025-09-19 01:15:31.049750 | orchestrator |
2025-09-19 01:15:31.049760 | orchestrator | + echo
2025-09-19 01:15:31.049770 | orchestrator | + echo '# Ceph OSD tree'
2025-09-19 01:15:31.049781 | orchestrator | + echo
2025-09-19 01:15:31.049790 | orchestrator | + ceph osd df tree
2025-09-19 01:15:31.606508 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2025-09-19 01:15:31.606620 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default
2025-09-19 01:15:31.606635 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3
2025-09-19 01:15:31.606646 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 74 MiB 19 GiB 5.61 0.95 190 up osd.0
2025-09-19 01:15:31.606657 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.22 1.05 202 up osd.4
2025-09-19 01:15:31.606668 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4
2025-09-19 01:15:31.606679 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1011 MiB 1 KiB 70 MiB 19 GiB 5.28 0.89 195 up osd.2
2025-09-19 01:15:31.606690 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.55 1.11 195 up osd.5
2025-09-19 01:15:31.606700 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5
2025-09-19 01:15:31.606711 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 6.86 1.16 184 up osd.1
2025-09-19 01:15:31.606721 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1017 MiB 947 MiB 1 KiB 70 MiB 19 GiB 4.97 0.84 204 up osd.3
2025-09-19 01:15:31.606757 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92
2025-09-19 01:15:31.606769 | orchestrator | MIN/MAX VAR: 0.84/1.16 STDDEV: 0.68
2025-09-19 01:15:31.653764 | orchestrator |
2025-09-19 01:15:31.653850 | orchestrator | # Ceph monitor status
2025-09-19 01:15:31.653863 | orchestrator |
2025-09-19 01:15:31.653873 | orchestrator | + echo
2025-09-19 01:15:31.653883 | orchestrator | + echo '# Ceph monitor status'
2025-09-19 01:15:31.653893 | orchestrator | + echo
2025-09-19 01:15:31.653903 | orchestrator | + ceph mon stat
2025-09-19 01:15:32.217779 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2025-09-19 01:15:32.262063 | orchestrator |
2025-09-19 01:15:32.262104 | orchestrator | # Ceph quorum status
2025-09-19 01:15:32.262118 | orchestrator |
2025-09-19 01:15:32.262129 | orchestrator | + echo
2025-09-19 01:15:32.262181 | orchestrator | + echo '# Ceph quorum status'
2025-09-19 01:15:32.262195 | orchestrator | + echo
2025-09-19 01:15:32.262265 | orchestrator | + ceph quorum_status
2025-09-19 01:15:32.262279 | orchestrator | + jq
2025-09-19 01:15:32.911896 | orchestrator | {
2025-09-19 01:15:32.911984 | orchestrator | "election_epoch": 6,
2025-09-19 01:15:32.911996 | orchestrator | "quorum": [
2025-09-19 01:15:32.912006 | orchestrator | 0,
2025-09-19 01:15:32.912015 | orchestrator | 1,
2025-09-19 01:15:32.912023 | orchestrator | 2
2025-09-19 01:15:32.912032 | orchestrator | ],
2025-09-19 01:15:32.912041 | orchestrator | "quorum_names": [
2025-09-19 01:15:32.912050 | orchestrator | "testbed-node-0",
2025-09-19 01:15:32.912059 | orchestrator | "testbed-node-1",
2025-09-19 01:15:32.912136 | orchestrator | "testbed-node-2"
2025-09-19 01:15:32.912147 | orchestrator | ],
2025-09-19 01:15:32.912157 | orchestrator | "quorum_leader_name": "testbed-node-0",
2025-09-19 01:15:32.912166 | orchestrator | "quorum_age": 1780,
2025-09-19 01:15:32.912175 | orchestrator | "features": {
2025-09-19 01:15:32.912184 | orchestrator | "quorum_con": "4540138322906710015",
2025-09-19 01:15:32.912192 | orchestrator | "quorum_mon": [
2025-09-19 01:15:32.912201 | orchestrator | "kraken",
2025-09-19 01:15:32.912215 | orchestrator | "luminous",
2025-09-19 01:15:32.912230 | orchestrator | "mimic",
2025-09-19 01:15:32.912244 | orchestrator | "osdmap-prune",
2025-09-19 01:15:32.912258 | orchestrator | "nautilus",
2025-09-19 01:15:32.912272 | orchestrator | "octopus",
2025-09-19 01:15:32.912286 | orchestrator | "pacific",
2025-09-19 01:15:32.912301 | orchestrator | "elector-pinging",
2025-09-19 01:15:32.912314 | orchestrator | "quincy",
2025-09-19 01:15:32.912328 | orchestrator | "reef"
2025-09-19 01:15:32.912340 | orchestrator | ]
2025-09-19 01:15:32.912354 | orchestrator | },
2025-09-19 01:15:32.912368 | orchestrator | "monmap": {
2025-09-19 01:15:32.912383 | orchestrator | "epoch": 1,
2025-09-19 01:15:32.912397 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111",
2025-09-19 01:15:32.912413 | orchestrator | "modified": "2025-09-19T00:45:31.674771Z",
2025-09-19 01:15:32.912427 | orchestrator | "created": "2025-09-19T00:45:31.674771Z",
2025-09-19 01:15:32.912442 | orchestrator | "min_mon_release": 18,
2025-09-19 01:15:32.912459 | orchestrator | "min_mon_release_name": "reef",
2025-09-19 01:15:32.912531 | orchestrator | "election_strategy": 1,
2025-09-19 01:15:32.912547 | orchestrator | "disallowed_leaders: ": "",
2025-09-19 01:15:32.912565 | orchestrator | "stretch_mode": false,
2025-09-19 01:15:32.912583 | orchestrator | "tiebreaker_mon": "",
2025-09-19 01:15:32.912599 | orchestrator | "removed_ranks: ": "",
2025-09-19 01:15:32.912616 | orchestrator | "features": {
2025-09-19 01:15:32.912627 | orchestrator | "persistent": [
2025-09-19 01:15:32.912637 | orchestrator | "kraken",
2025-09-19 01:15:32.912648 | orchestrator | "luminous",
2025-09-19 01:15:32.912659 | orchestrator | "mimic",
2025-09-19 01:15:32.912669 | orchestrator | "osdmap-prune",
2025-09-19 01:15:32.912680 | orchestrator | "nautilus",
2025-09-19 01:15:32.912691 | orchestrator | "octopus",
2025-09-19 01:15:32.912703 | orchestrator | "pacific",
2025-09-19 01:15:32.912714 | orchestrator | "elector-pinging",
2025-09-19 01:15:32.912724 | orchestrator | "quincy",
2025-09-19 01:15:32.912735 | orchestrator | "reef"
2025-09-19 01:15:32.912746 | orchestrator | ],
2025-09-19 01:15:32.912757 | orchestrator | "optional": []
2025-09-19 01:15:32.912768 | orchestrator | },
2025-09-19 01:15:32.912779 | orchestrator | "mons": [
2025-09-19 01:15:32.912790 | orchestrator | {
2025-09-19 01:15:32.912826 | orchestrator | "rank": 0,
2025-09-19 01:15:32.912836 | orchestrator | "name": "testbed-node-0",
2025-09-19 01:15:32.912846 | orchestrator | "public_addrs": {
2025-09-19 01:15:32.912855 | orchestrator | "addrvec": [
2025-09-19 01:15:32.912865 | orchestrator | {
2025-09-19 01:15:32.912874 | orchestrator | "type": "v2",
2025-09-19 01:15:32.912883 | orchestrator | "addr": "192.168.16.10:3300",
2025-09-19 01:15:32.912893 | orchestrator | "nonce": 0
2025-09-19 01:15:32.912902 | orchestrator | },
2025-09-19 01:15:32.912911 | orchestrator | {
2025-09-19 01:15:32.912921 | orchestrator | "type": "v1",
2025-09-19 01:15:32.912930 | orchestrator | "addr": "192.168.16.10:6789",
2025-09-19 01:15:32.912939 | orchestrator | "nonce": 0
2025-09-19 01:15:32.912949 | orchestrator | }
2025-09-19 01:15:32.912958 | orchestrator | ]
2025-09-19 01:15:32.912967 | orchestrator | },
2025-09-19 01:15:32.912977 | orchestrator | "addr": "192.168.16.10:6789/0",
2025-09-19 01:15:32.912986 | orchestrator | "public_addr": "192.168.16.10:6789/0",
2025-09-19 01:15:32.912996 | orchestrator | "priority": 0,
2025-09-19 01:15:32.913005 | orchestrator | "weight": 0,
2025-09-19 01:15:32.913014 | orchestrator | "crush_location": "{}"
2025-09-19 01:15:32.913023 | orchestrator | },
2025-09-19 01:15:32.913033 | orchestrator | {
2025-09-19 01:15:32.913042 | orchestrator | "rank": 1,
2025-09-19 01:15:32.913051 | orchestrator | "name": "testbed-node-1",
2025-09-19 01:15:32.913061 | orchestrator | "public_addrs": {
2025-09-19 01:15:32.913070 | orchestrator | "addrvec": [
2025-09-19 01:15:32.913079 | orchestrator | {
2025-09-19 01:15:32.913089 | orchestrator | "type": "v2",
2025-09-19 01:15:32.913098 | orchestrator | "addr": "192.168.16.11:3300",
2025-09-19 01:15:32.913107 | orchestrator | "nonce": 0
2025-09-19 01:15:32.913117 | orchestrator | },
2025-09-19 01:15:32.913126 | orchestrator | {
2025-09-19 01:15:32.913135 | orchestrator | "type": "v1",
2025-09-19 01:15:32.913144 | orchestrator | "addr": "192.168.16.11:6789",
2025-09-19 01:15:32.913154 | orchestrator | "nonce": 0
2025-09-19 01:15:32.913163 | orchestrator | }
2025-09-19 01:15:32.913172 | orchestrator | ]
2025-09-19 01:15:32.913182 | orchestrator | },
2025-09-19 01:15:32.913191 | orchestrator | "addr": "192.168.16.11:6789/0",
2025-09-19 01:15:32.913200 | orchestrator | "public_addr": "192.168.16.11:6789/0",
2025-09-19 01:15:32.913210 | orchestrator | "priority": 0,
2025-09-19 01:15:32.913219 | orchestrator | "weight": 0,
2025-09-19 01:15:32.913229 | orchestrator | "crush_location": "{}"
2025-09-19 01:15:32.913245 | orchestrator | },
2025-09-19 01:15:32.913261 | orchestrator | {
2025-09-19 01:15:32.913277 | orchestrator | "rank": 2,
2025-09-19 01:15:32.913292 | orchestrator | "name": "testbed-node-2",
2025-09-19 01:15:32.913308 | orchestrator | "public_addrs": {
2025-09-19 01:15:32.913324 | orchestrator | "addrvec": [
2025-09-19 01:15:32.913340 | orchestrator | {
2025-09-19 01:15:32.913356 | orchestrator | "type": "v2",
2025-09-19 01:15:32.913375 | orchestrator | "addr": "192.168.16.12:3300",
2025-09-19 01:15:32.913390 | orchestrator | "nonce": 0
2025-09-19 01:15:32.913405 | orchestrator | },
2025-09-19 01:15:32.913414 | orchestrator | {
2025-09-19 01:15:32.913424 | orchestrator | "type": "v1",
2025-09-19 01:15:32.913433 | orchestrator | "addr": "192.168.16.12:6789",
2025-09-19 01:15:32.913443 | orchestrator | "nonce": 0
2025-09-19 01:15:32.913452 | orchestrator | }
2025-09-19 01:15:32.913461 | orchestrator | ]
2025-09-19 01:15:32.913499 | orchestrator | },
2025-09-19 01:15:32.913509 | orchestrator | "addr": "192.168.16.12:6789/0",
2025-09-19 01:15:32.913519 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2025-09-19 01:15:32.913528 | orchestrator | "priority": 0,
2025-09-19 01:15:32.913538 | orchestrator | "weight": 0,
2025-09-19 01:15:32.913547 | orchestrator | "crush_location": "{}"
2025-09-19 01:15:32.913556 | orchestrator | }
2025-09-19 01:15:32.913571 | orchestrator | ]
2025-09-19 01:15:32.913587 | orchestrator | }
2025-09-19 01:15:32.913603 | orchestrator | }
2025-09-19 01:15:32.913826 | orchestrator |
2025-09-19 01:15:32.913853 | orchestrator | # Ceph free space status
2025-09-19 01:15:32.913863 | orchestrator |
2025-09-19 01:15:32.913873 | orchestrator | + echo
2025-09-19 01:15:32.913883 | orchestrator | + echo '# Ceph free space status'
2025-09-19 01:15:32.913892 | orchestrator | + echo
2025-09-19 01:15:32.913902 | orchestrator | + ceph df
2025-09-19 01:15:33.542902 | orchestrator | --- RAW STORAGE ---
2025-09-19 01:15:33.543001 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2025-09-19 01:15:33.543055 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2025-09-19 01:15:33.543068 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2025-09-19 01:15:33.543080 | orchestrator |
2025-09-19 01:15:33.543092 | orchestrator | --- POOLS ---
2025-09-19 01:15:33.543104 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2025-09-19 01:15:33.543116 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB
2025-09-19 01:15:33.543128 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2025-09-19 01:15:33.543139 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2025-09-19 01:15:33.543150 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2025-09-19 01:15:33.543172 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2025-09-19 01:15:33.543184 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2025-09-19 01:15:33.543195 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB
2025-09-19 01:15:33.543207 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2025-09-19 01:15:33.543218 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB
2025-09-19 01:15:33.543229 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2025-09-19 01:15:33.543240 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2025-09-19 01:15:33.543251 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.93 35 GiB
2025-09-19 01:15:33.543263 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2025-09-19 01:15:33.543274 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2025-09-19 01:15:33.589907 | orchestrator | ++ semver 9.2.0 5.0.0
2025-09-19 01:15:33.655514 | orchestrator | + [[ 1 -eq -1 ]]
2025-09-19 01:15:33.655589 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2025-09-19 01:15:33.655600 | orchestrator | + osism apply facts
2025-09-19 01:15:45.636660 | orchestrator | 2025-09-19 01:15:45 | INFO  | Task 14417cdd-c44d-44d1-bf46-f1b999f3c09e (facts) was prepared for execution.
2025-09-19 01:15:45.636762 | orchestrator | 2025-09-19 01:15:45 | INFO  | It takes a moment until task 14417cdd-c44d-44d1-bf46-f1b999f3c09e (facts) has been started and output is visible here.
2025-09-19 01:15:59.484337 | orchestrator |
2025-09-19 01:15:59.484480 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-09-19 01:15:59.484526 | orchestrator |
2025-09-19 01:15:59.484541 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-19 01:15:59.484552 | orchestrator | Friday 19 September 2025 01:15:49 +0000 (0:00:00.295) 0:00:00.295 ******
2025-09-19 01:15:59.484564 | orchestrator | ok: [testbed-manager]
2025-09-19 01:15:59.484577 | orchestrator | ok: [testbed-node-0]
2025-09-19 01:15:59.484588 | orchestrator | ok: [testbed-node-1]
2025-09-19 01:15:59.484598 | orchestrator | ok: [testbed-node-2]
2025-09-19 01:15:59.484609 | orchestrator | ok: [testbed-node-3]
2025-09-19 01:15:59.484620 | orchestrator | ok: [testbed-node-4]
2025-09-19 01:15:59.484631 | orchestrator | ok: [testbed-node-5]
2025-09-19 01:15:59.484642 | orchestrator |
2025-09-19 01:15:59.484653 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-19 01:15:59.484664 | orchestrator | Friday 19 September 2025 01:15:51 +0000 (0:00:01.575) 0:00:01.871 ******
2025-09-19 01:15:59.484675 | orchestrator | skipping: [testbed-manager] 2025-09-19 01:15:59.484686 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:15:59.484697 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:15:59.484708 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:15:59.484719 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:15:59.484729 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:15:59.484740 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:15:59.484751 | orchestrator | 2025-09-19 01:15:59.484783 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-19 01:15:59.484795 | orchestrator | 2025-09-19 01:15:59.484806 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-19 01:15:59.484816 | orchestrator | Friday 19 September 2025 01:15:52 +0000 (0:00:01.272) 0:00:03.144 ****** 2025-09-19 01:15:59.484827 | orchestrator | ok: [testbed-node-1] 2025-09-19 01:15:59.484838 | orchestrator | ok: [testbed-node-2] 2025-09-19 01:15:59.484849 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:15:59.484861 | orchestrator | ok: [testbed-manager] 2025-09-19 01:15:59.484874 | orchestrator | ok: [testbed-node-3] 2025-09-19 01:15:59.484886 | orchestrator | ok: [testbed-node-4] 2025-09-19 01:15:59.484898 | orchestrator | ok: [testbed-node-5] 2025-09-19 01:15:59.484911 | orchestrator | 2025-09-19 01:15:59.484924 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-19 01:15:59.484936 | orchestrator | 2025-09-19 01:15:59.484947 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-19 01:15:59.484958 | orchestrator | Friday 19 September 2025 01:15:58 +0000 (0:00:05.618) 0:00:08.762 ****** 2025-09-19 01:15:59.484969 | orchestrator | skipping: [testbed-manager] 2025-09-19 01:15:59.484979 | orchestrator | skipping: 
[testbed-node-0] 2025-09-19 01:15:59.484990 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:15:59.485001 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:15:59.485012 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:15:59.485022 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:15:59.485033 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:15:59.485044 | orchestrator | 2025-09-19 01:15:59.485055 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 01:15:59.485066 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 01:15:59.485078 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 01:15:59.485089 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 01:15:59.485100 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 01:15:59.485111 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 01:15:59.485122 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 01:15:59.485132 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 01:15:59.485150 | orchestrator | 2025-09-19 01:15:59.485168 | orchestrator | 2025-09-19 01:15:59.485186 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 01:15:59.485205 | orchestrator | Friday 19 September 2025 01:15:59 +0000 (0:00:00.573) 0:00:09.335 ****** 2025-09-19 01:15:59.485225 | orchestrator | =============================================================================== 2025-09-19 01:15:59.485245 | orchestrator | Gathers facts about hosts 
----------------------------------------------- 5.62s 2025-09-19 01:15:59.485260 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.58s 2025-09-19 01:15:59.485279 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.27s 2025-09-19 01:15:59.485297 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s 2025-09-19 01:15:59.769204 | orchestrator | + osism validate ceph-mons 2025-09-19 01:16:22.546594 | orchestrator | 2025-09-19 01:16:22.546702 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-09-19 01:16:22.546719 | orchestrator | 2025-09-19 01:16:22.546755 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-09-19 01:16:22.546767 | orchestrator | Friday 19 September 2025 01:16:05 +0000 (0:00:00.432) 0:00:00.432 ****** 2025-09-19 01:16:22.546779 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 01:16:22.546790 | orchestrator | 2025-09-19 01:16:22.546800 | orchestrator | TASK [Create report output directory] ****************************************** 2025-09-19 01:16:22.546811 | orchestrator | Friday 19 September 2025 01:16:06 +0000 (0:00:00.671) 0:00:01.103 ****** 2025-09-19 01:16:22.546822 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 01:16:22.546833 | orchestrator | 2025-09-19 01:16:22.546844 | orchestrator | TASK [Define report vars] ****************************************************** 2025-09-19 01:16:22.546855 | orchestrator | Friday 19 September 2025 01:16:07 +0000 (0:00:00.901) 0:00:02.005 ****** 2025-09-19 01:16:22.546866 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:16:22.546877 | orchestrator | 2025-09-19 01:16:22.546888 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-09-19 01:16:22.546899 | 
orchestrator | Friday 19 September 2025 01:16:07 +0000 (0:00:00.239) 0:00:02.245 ****** 2025-09-19 01:16:22.546910 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:16:22.546920 | orchestrator | ok: [testbed-node-1] 2025-09-19 01:16:22.546931 | orchestrator | ok: [testbed-node-2] 2025-09-19 01:16:22.546942 | orchestrator | 2025-09-19 01:16:22.546953 | orchestrator | TASK [Get container info] ****************************************************** 2025-09-19 01:16:22.546963 | orchestrator | Friday 19 September 2025 01:16:08 +0000 (0:00:00.289) 0:00:02.534 ****** 2025-09-19 01:16:22.546974 | orchestrator | ok: [testbed-node-2] 2025-09-19 01:16:22.546986 | orchestrator | ok: [testbed-node-1] 2025-09-19 01:16:22.546997 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:16:22.547007 | orchestrator | 2025-09-19 01:16:22.547018 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-09-19 01:16:22.547029 | orchestrator | Friday 19 September 2025 01:16:09 +0000 (0:00:00.997) 0:00:03.532 ****** 2025-09-19 01:16:22.547040 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:16:22.547051 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:16:22.547062 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:16:22.547072 | orchestrator | 2025-09-19 01:16:22.547083 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-09-19 01:16:22.547094 | orchestrator | Friday 19 September 2025 01:16:09 +0000 (0:00:00.322) 0:00:03.855 ****** 2025-09-19 01:16:22.547107 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:16:22.547120 | orchestrator | ok: [testbed-node-1] 2025-09-19 01:16:22.547132 | orchestrator | ok: [testbed-node-2] 2025-09-19 01:16:22.547144 | orchestrator | 2025-09-19 01:16:22.547156 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-19 01:16:22.547170 | orchestrator | Friday 19 September 2025 01:16:09 +0000 
(0:00:00.484) 0:00:04.340 ****** 2025-09-19 01:16:22.547182 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:16:22.547194 | orchestrator | ok: [testbed-node-1] 2025-09-19 01:16:22.547206 | orchestrator | ok: [testbed-node-2] 2025-09-19 01:16:22.547219 | orchestrator | 2025-09-19 01:16:22.547232 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-09-19 01:16:22.547244 | orchestrator | Friday 19 September 2025 01:16:10 +0000 (0:00:00.339) 0:00:04.679 ****** 2025-09-19 01:16:22.547257 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:16:22.547269 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:16:22.547282 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:16:22.547294 | orchestrator | 2025-09-19 01:16:22.547307 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-09-19 01:16:22.547319 | orchestrator | Friday 19 September 2025 01:16:10 +0000 (0:00:00.316) 0:00:04.995 ****** 2025-09-19 01:16:22.547331 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:16:22.547344 | orchestrator | ok: [testbed-node-1] 2025-09-19 01:16:22.547355 | orchestrator | ok: [testbed-node-2] 2025-09-19 01:16:22.547368 | orchestrator | 2025-09-19 01:16:22.547380 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-19 01:16:22.547400 | orchestrator | Friday 19 September 2025 01:16:10 +0000 (0:00:00.292) 0:00:05.288 ****** 2025-09-19 01:16:22.547412 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:16:22.547424 | orchestrator | 2025-09-19 01:16:22.547436 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-19 01:16:22.547449 | orchestrator | Friday 19 September 2025 01:16:11 +0000 (0:00:00.226) 0:00:05.514 ****** 2025-09-19 01:16:22.547460 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:16:22.547471 | orchestrator | 2025-09-19 01:16:22.547482 | 
orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-19 01:16:22.547493 | orchestrator | Friday 19 September 2025 01:16:11 +0000 (0:00:00.710) 0:00:06.225 ****** 2025-09-19 01:16:22.547503 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:16:22.547514 | orchestrator | 2025-09-19 01:16:22.547525 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 01:16:22.547569 | orchestrator | Friday 19 September 2025 01:16:12 +0000 (0:00:00.285) 0:00:06.510 ****** 2025-09-19 01:16:22.547579 | orchestrator | 2025-09-19 01:16:22.547590 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 01:16:22.547601 | orchestrator | Friday 19 September 2025 01:16:12 +0000 (0:00:00.069) 0:00:06.580 ****** 2025-09-19 01:16:22.547612 | orchestrator | 2025-09-19 01:16:22.547622 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 01:16:22.547633 | orchestrator | Friday 19 September 2025 01:16:12 +0000 (0:00:00.073) 0:00:06.654 ****** 2025-09-19 01:16:22.547644 | orchestrator | 2025-09-19 01:16:22.547655 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-19 01:16:22.547666 | orchestrator | Friday 19 September 2025 01:16:12 +0000 (0:00:00.073) 0:00:06.727 ****** 2025-09-19 01:16:22.547676 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:16:22.547687 | orchestrator | 2025-09-19 01:16:22.547698 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-09-19 01:16:22.547709 | orchestrator | Friday 19 September 2025 01:16:12 +0000 (0:00:00.253) 0:00:06.981 ****** 2025-09-19 01:16:22.547719 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:16:22.547730 | orchestrator | 2025-09-19 01:16:22.547757 | orchestrator | TASK [Prepare quorum test vars] 
************************************************ 2025-09-19 01:16:22.547768 | orchestrator | Friday 19 September 2025 01:16:12 +0000 (0:00:00.230) 0:00:07.211 ****** 2025-09-19 01:16:22.547779 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:16:22.547790 | orchestrator | 2025-09-19 01:16:22.547801 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-09-19 01:16:22.547811 | orchestrator | Friday 19 September 2025 01:16:12 +0000 (0:00:00.170) 0:00:07.382 ****** 2025-09-19 01:16:22.547839 | orchestrator | changed: [testbed-node-0] 2025-09-19 01:16:22.547850 | orchestrator | 2025-09-19 01:16:22.547861 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-09-19 01:16:22.547876 | orchestrator | Friday 19 September 2025 01:16:14 +0000 (0:00:01.579) 0:00:08.961 ****** 2025-09-19 01:16:22.547887 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:16:22.547898 | orchestrator | 2025-09-19 01:16:22.547909 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-09-19 01:16:22.547920 | orchestrator | Friday 19 September 2025 01:16:14 +0000 (0:00:00.348) 0:00:09.309 ****** 2025-09-19 01:16:22.547931 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:16:22.547941 | orchestrator | 2025-09-19 01:16:22.547952 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-09-19 01:16:22.547963 | orchestrator | Friday 19 September 2025 01:16:14 +0000 (0:00:00.129) 0:00:09.439 ****** 2025-09-19 01:16:22.547973 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:16:22.547984 | orchestrator | 2025-09-19 01:16:22.547994 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-09-19 01:16:22.548005 | orchestrator | Friday 19 September 2025 01:16:15 +0000 (0:00:00.331) 0:00:09.770 ****** 2025-09-19 01:16:22.548023 | orchestrator | ok: [testbed-node-0] 
2025-09-19 01:16:22.548034 | orchestrator | 2025-09-19 01:16:22.548045 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-09-19 01:16:22.548056 | orchestrator | Friday 19 September 2025 01:16:16 +0000 (0:00:00.812) 0:00:10.583 ****** 2025-09-19 01:16:22.548066 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:16:22.548077 | orchestrator | 2025-09-19 01:16:22.548088 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-09-19 01:16:22.548098 | orchestrator | Friday 19 September 2025 01:16:16 +0000 (0:00:00.115) 0:00:10.699 ****** 2025-09-19 01:16:22.548109 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:16:22.548120 | orchestrator | 2025-09-19 01:16:22.548130 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-09-19 01:16:22.548141 | orchestrator | Friday 19 September 2025 01:16:16 +0000 (0:00:00.138) 0:00:10.837 ****** 2025-09-19 01:16:22.548151 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:16:22.548162 | orchestrator | 2025-09-19 01:16:22.548173 | orchestrator | TASK [Gather status data] ****************************************************** 2025-09-19 01:16:22.548183 | orchestrator | Friday 19 September 2025 01:16:16 +0000 (0:00:00.129) 0:00:10.967 ****** 2025-09-19 01:16:22.548194 | orchestrator | changed: [testbed-node-0] 2025-09-19 01:16:22.548205 | orchestrator | 2025-09-19 01:16:22.548215 | orchestrator | TASK [Set health test data] **************************************************** 2025-09-19 01:16:22.548226 | orchestrator | Friday 19 September 2025 01:16:17 +0000 (0:00:01.289) 0:00:12.257 ****** 2025-09-19 01:16:22.548237 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:16:22.548247 | orchestrator | 2025-09-19 01:16:22.548258 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-09-19 01:16:22.548269 | orchestrator | Friday 19 September 
2025 01:16:18 +0000 (0:00:00.328) 0:00:12.585 ****** 2025-09-19 01:16:22.548280 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:16:22.548290 | orchestrator | 2025-09-19 01:16:22.548301 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-09-19 01:16:22.548312 | orchestrator | Friday 19 September 2025 01:16:18 +0000 (0:00:00.140) 0:00:12.726 ****** 2025-09-19 01:16:22.548322 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:16:22.548333 | orchestrator | 2025-09-19 01:16:22.548344 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-09-19 01:16:22.548354 | orchestrator | Friday 19 September 2025 01:16:18 +0000 (0:00:00.175) 0:00:12.901 ****** 2025-09-19 01:16:22.548365 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:16:22.548375 | orchestrator | 2025-09-19 01:16:22.548386 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-09-19 01:16:22.548397 | orchestrator | Friday 19 September 2025 01:16:18 +0000 (0:00:00.143) 0:00:13.045 ****** 2025-09-19 01:16:22.548407 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:16:22.548418 | orchestrator | 2025-09-19 01:16:22.548429 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-09-19 01:16:22.548440 | orchestrator | Friday 19 September 2025 01:16:18 +0000 (0:00:00.142) 0:00:13.187 ****** 2025-09-19 01:16:22.548451 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 01:16:22.548461 | orchestrator | 2025-09-19 01:16:22.548472 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-09-19 01:16:22.548483 | orchestrator | Friday 19 September 2025 01:16:19 +0000 (0:00:00.628) 0:00:13.817 ****** 2025-09-19 01:16:22.548494 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:16:22.548504 | orchestrator | 2025-09-19 
01:16:22.548515 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-19 01:16:22.548542 | orchestrator | Friday 19 September 2025 01:16:20 +0000 (0:00:00.766) 0:00:14.584 ****** 2025-09-19 01:16:22.548553 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 01:16:22.548564 | orchestrator | 2025-09-19 01:16:22.548620 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-19 01:16:22.548633 | orchestrator | Friday 19 September 2025 01:16:21 +0000 (0:00:01.686) 0:00:16.271 ****** 2025-09-19 01:16:22.548651 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 01:16:22.548662 | orchestrator | 2025-09-19 01:16:22.548673 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-19 01:16:22.548684 | orchestrator | Friday 19 September 2025 01:16:22 +0000 (0:00:00.278) 0:00:16.550 ****** 2025-09-19 01:16:22.548694 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 01:16:22.548705 | orchestrator | 2025-09-19 01:16:22.548723 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 01:16:24.776807 | orchestrator | Friday 19 September 2025 01:16:22 +0000 (0:00:00.243) 0:00:16.794 ****** 2025-09-19 01:16:24.776915 | orchestrator | 2025-09-19 01:16:24.776933 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 01:16:24.776945 | orchestrator | Friday 19 September 2025 01:16:22 +0000 (0:00:00.066) 0:00:16.860 ****** 2025-09-19 01:16:24.776956 | orchestrator | 2025-09-19 01:16:24.776967 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 01:16:24.776978 | orchestrator | Friday 19 September 2025 01:16:22 +0000 (0:00:00.066) 0:00:16.926 ****** 2025-09-19 01:16:24.776988 | 
orchestrator | 2025-09-19 01:16:24.777018 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-09-19 01:16:24.777029 | orchestrator | Friday 19 September 2025 01:16:22 +0000 (0:00:00.071) 0:00:16.998 ****** 2025-09-19 01:16:24.777040 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 01:16:24.777051 | orchestrator | 2025-09-19 01:16:24.777061 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-19 01:16:24.777072 | orchestrator | Friday 19 September 2025 01:16:23 +0000 (0:00:01.388) 0:00:18.387 ****** 2025-09-19 01:16:24.777082 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-09-19 01:16:24.777093 | orchestrator |  "msg": [ 2025-09-19 01:16:24.777105 | orchestrator |  "Validator run completed.", 2025-09-19 01:16:24.777116 | orchestrator |  "You can find the report file here:", 2025-09-19 01:16:24.777127 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-09-19T01:16:06+00:00-report.json", 2025-09-19 01:16:24.777139 | orchestrator |  "on the following host:", 2025-09-19 01:16:24.777150 | orchestrator |  "testbed-manager" 2025-09-19 01:16:24.777160 | orchestrator |  ] 2025-09-19 01:16:24.777172 | orchestrator | } 2025-09-19 01:16:24.777183 | orchestrator | 2025-09-19 01:16:24.777193 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 01:16:24.777206 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-19 01:16:24.777219 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 01:16:24.777231 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 01:16:24.777242 | orchestrator | 2025-09-19 01:16:24.777253 | orchestrator | 2025-09-19 01:16:24.777268 | 
orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 01:16:24.777278 | orchestrator | Friday 19 September 2025 01:16:24 +0000 (0:00:00.420) 0:00:18.807 ****** 2025-09-19 01:16:24.777289 | orchestrator | =============================================================================== 2025-09-19 01:16:24.777300 | orchestrator | Aggregate test results step one ----------------------------------------- 1.69s 2025-09-19 01:16:24.777310 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.58s 2025-09-19 01:16:24.777321 | orchestrator | Write report file ------------------------------------------------------- 1.39s 2025-09-19 01:16:24.777334 | orchestrator | Gather status data ------------------------------------------------------ 1.29s 2025-09-19 01:16:24.777347 | orchestrator | Get container info ------------------------------------------------------ 1.00s 2025-09-19 01:16:24.777385 | orchestrator | Create report output directory ------------------------------------------ 0.90s 2025-09-19 01:16:24.777398 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.81s 2025-09-19 01:16:24.777410 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.77s 2025-09-19 01:16:24.777422 | orchestrator | Aggregate test results step two ----------------------------------------- 0.71s 2025-09-19 01:16:24.777434 | orchestrator | Get timestamp for report file ------------------------------------------- 0.67s 2025-09-19 01:16:24.777446 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.63s 2025-09-19 01:16:24.777458 | orchestrator | Set test result to passed if container is existing ---------------------- 0.48s 2025-09-19 01:16:24.777470 | orchestrator | Print report file information ------------------------------------------- 0.42s 2025-09-19 01:16:24.777482 | orchestrator | Set 
quorum test data ---------------------------------------------------- 0.35s 2025-09-19 01:16:24.777495 | orchestrator | Prepare test data ------------------------------------------------------- 0.34s 2025-09-19 01:16:24.777507 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s 2025-09-19 01:16:24.777520 | orchestrator | Set health test data ---------------------------------------------------- 0.33s 2025-09-19 01:16:24.777559 | orchestrator | Set test result to failed if container is missing ----------------------- 0.32s 2025-09-19 01:16:24.777571 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.32s 2025-09-19 01:16:24.777583 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.29s 2025-09-19 01:16:25.098731 | orchestrator | + osism validate ceph-mgrs 2025-09-19 01:16:57.374941 | orchestrator | 2025-09-19 01:16:57.375079 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-09-19 01:16:57.375105 | orchestrator | 2025-09-19 01:16:57.375122 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-09-19 01:16:57.375142 | orchestrator | Friday 19 September 2025 01:16:41 +0000 (0:00:00.432) 0:00:00.432 ****** 2025-09-19 01:16:57.375161 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 01:16:57.375201 | orchestrator | 2025-09-19 01:16:57.375224 | orchestrator | TASK [Create report output directory] ****************************************** 2025-09-19 01:16:57.375244 | orchestrator | Friday 19 September 2025 01:16:42 +0000 (0:00:00.666) 0:00:01.098 ****** 2025-09-19 01:16:57.375262 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 01:16:57.375273 | orchestrator | 2025-09-19 01:16:57.375284 | orchestrator | TASK [Define report vars] ****************************************************** 
2025-09-19 01:16:57.375295 | orchestrator | Friday 19 September 2025 01:16:43 +0000 (0:00:00.880) 0:00:01.979 ****** 2025-09-19 01:16:57.375306 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:16:57.375318 | orchestrator | 2025-09-19 01:16:57.375329 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-09-19 01:16:57.375339 | orchestrator | Friday 19 September 2025 01:16:43 +0000 (0:00:00.261) 0:00:02.241 ****** 2025-09-19 01:16:57.375350 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:16:57.375361 | orchestrator | ok: [testbed-node-1] 2025-09-19 01:16:57.375372 | orchestrator | ok: [testbed-node-2] 2025-09-19 01:16:57.375382 | orchestrator | 2025-09-19 01:16:57.375393 | orchestrator | TASK [Get container info] ****************************************************** 2025-09-19 01:16:57.375404 | orchestrator | Friday 19 September 2025 01:16:43 +0000 (0:00:00.287) 0:00:02.529 ****** 2025-09-19 01:16:57.375416 | orchestrator | ok: [testbed-node-1] 2025-09-19 01:16:57.375427 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:16:57.375437 | orchestrator | ok: [testbed-node-2] 2025-09-19 01:16:57.375448 | orchestrator | 2025-09-19 01:16:57.375459 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-09-19 01:16:57.375472 | orchestrator | Friday 19 September 2025 01:16:45 +0000 (0:00:01.054) 0:00:03.584 ****** 2025-09-19 01:16:57.375485 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:16:57.375498 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:16:57.375535 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:16:57.375549 | orchestrator | 2025-09-19 01:16:57.375562 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-09-19 01:16:57.375597 | orchestrator | Friday 19 September 2025 01:16:45 +0000 (0:00:00.304) 0:00:03.888 ****** 2025-09-19 01:16:57.375609 | orchestrator | ok: [testbed-node-0] 
2025-09-19 01:16:57.375622 | orchestrator | ok: [testbed-node-1] 2025-09-19 01:16:57.375632 | orchestrator | ok: [testbed-node-2] 2025-09-19 01:16:57.375643 | orchestrator | 2025-09-19 01:16:57.375653 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-19 01:16:57.375664 | orchestrator | Friday 19 September 2025 01:16:45 +0000 (0:00:00.558) 0:00:04.447 ****** 2025-09-19 01:16:57.375675 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:16:57.375685 | orchestrator | ok: [testbed-node-1] 2025-09-19 01:16:57.375696 | orchestrator | ok: [testbed-node-2] 2025-09-19 01:16:57.375706 | orchestrator | 2025-09-19 01:16:57.375717 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-09-19 01:16:57.375727 | orchestrator | Friday 19 September 2025 01:16:46 +0000 (0:00:00.334) 0:00:04.781 ****** 2025-09-19 01:16:57.375738 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:16:57.375749 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:16:57.375759 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:16:57.375770 | orchestrator | 2025-09-19 01:16:57.375780 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-09-19 01:16:57.375791 | orchestrator | Friday 19 September 2025 01:16:46 +0000 (0:00:00.333) 0:00:05.114 ****** 2025-09-19 01:16:57.375802 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:16:57.375812 | orchestrator | ok: [testbed-node-1] 2025-09-19 01:16:57.375823 | orchestrator | ok: [testbed-node-2] 2025-09-19 01:16:57.375834 | orchestrator | 2025-09-19 01:16:57.375845 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-19 01:16:57.375855 | orchestrator | Friday 19 September 2025 01:16:46 +0000 (0:00:00.321) 0:00:05.435 ****** 2025-09-19 01:16:57.375866 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:16:57.375876 | orchestrator | 
2025-09-19 01:16:57.375887 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-09-19 01:16:57.375898 | orchestrator | Friday 19 September 2025  01:16:47 +0000 (0:00:00.250)       0:00:05.686 ******
2025-09-19 01:16:57.375926 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:16:57.375937 | orchestrator |
2025-09-19 01:16:57.375948 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-09-19 01:16:57.375959 | orchestrator | Friday 19 September 2025  01:16:47 +0000 (0:00:00.793)       0:00:06.479 ******
2025-09-19 01:16:57.375970 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:16:57.375980 | orchestrator |
2025-09-19 01:16:57.375991 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 01:16:57.376001 | orchestrator | Friday 19 September 2025  01:16:48 +0000 (0:00:00.069)       0:00:06.738 ******
2025-09-19 01:16:57.376012 | orchestrator |
2025-09-19 01:16:57.376023 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 01:16:57.376033 | orchestrator | Friday 19 September 2025  01:16:48 +0000 (0:00:00.070)       0:00:06.808 ******
2025-09-19 01:16:57.376044 | orchestrator |
2025-09-19 01:16:57.376055 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 01:16:57.376065 | orchestrator | Friday 19 September 2025  01:16:48 +0000 (0:00:00.074)       0:00:06.878 ******
2025-09-19 01:16:57.376076 | orchestrator |
2025-09-19 01:16:57.376087 | orchestrator | TASK [Print report file information] *******************************************
2025-09-19 01:16:57.376097 | orchestrator | Friday 19 September 2025  01:16:48 +0000 (0:00:00.260)       0:00:06.953 ******
2025-09-19 01:16:57.376108 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:16:57.376119 | orchestrator |
2025-09-19 01:16:57.376129 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-09-19 01:16:57.376140 | orchestrator | Friday 19 September 2025  01:16:48 +0000 (0:00:00.260)       0:00:07.213 ******
2025-09-19 01:16:57.376159 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:16:57.376170 | orchestrator |
2025-09-19 01:16:57.376199 | orchestrator | TASK [Define mgr module test vars] *********************************************
2025-09-19 01:16:57.376211 | orchestrator | Friday 19 September 2025  01:16:48 +0000 (0:00:00.271)       0:00:07.485 ******
2025-09-19 01:16:57.376222 | orchestrator | ok: [testbed-node-0]
2025-09-19 01:16:57.376233 | orchestrator |
2025-09-19 01:16:57.376243 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2025-09-19 01:16:57.376254 | orchestrator | Friday 19 September 2025  01:16:49 +0000 (0:00:00.149)       0:00:07.634 ******
2025-09-19 01:16:57.376265 | orchestrator | changed: [testbed-node-0]
2025-09-19 01:16:57.376275 | orchestrator |
2025-09-19 01:16:57.376286 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2025-09-19 01:16:57.376297 | orchestrator | Friday 19 September 2025  01:16:51 +0000 (0:00:01.939)       0:00:09.574 ******
2025-09-19 01:16:57.376307 | orchestrator | ok: [testbed-node-0]
2025-09-19 01:16:57.376318 | orchestrator |
2025-09-19 01:16:57.376329 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2025-09-19 01:16:57.376340 | orchestrator | Friday 19 September 2025  01:16:51 +0000 (0:00:00.264)       0:00:09.839 ******
2025-09-19 01:16:57.376350 | orchestrator | ok: [testbed-node-0]
2025-09-19 01:16:57.376361 | orchestrator |
2025-09-19 01:16:57.376372 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2025-09-19 01:16:57.376382 | orchestrator | Friday 19 September 2025  01:16:51 +0000 (0:00:00.323)       0:00:10.162 ******
2025-09-19 01:16:57.376398 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:16:57.376409 | orchestrator |
2025-09-19 01:16:57.376420 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2025-09-19 01:16:57.376431 | orchestrator | Friday 19 September 2025  01:16:51 +0000 (0:00:00.389)       0:00:10.552 ******
2025-09-19 01:16:57.376442 | orchestrator | ok: [testbed-node-0]
2025-09-19 01:16:57.376452 | orchestrator |
2025-09-19 01:16:57.376463 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-09-19 01:16:57.376474 | orchestrator | Friday 19 September 2025  01:16:52 +0000 (0:00:00.169)       0:00:10.721 ******
2025-09-19 01:16:57.376485 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-19 01:16:57.376495 | orchestrator |
2025-09-19 01:16:57.376506 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-09-19 01:16:57.376517 | orchestrator | Friday 19 September 2025  01:16:52 +0000 (0:00:00.281)       0:00:11.003 ******
2025-09-19 01:16:57.376528 | orchestrator | skipping: [testbed-node-0]
2025-09-19 01:16:57.376538 | orchestrator |
2025-09-19 01:16:57.376549 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-09-19 01:16:57.376559 | orchestrator | Friday 19 September 2025  01:16:52 +0000 (0:00:00.238)       0:00:11.241 ******
2025-09-19 01:16:57.376636 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-19 01:16:57.376651 | orchestrator |
2025-09-19 01:16:57.376662 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-09-19 01:16:57.376673 | orchestrator | Friday 19 September 2025  01:16:53 +0000 (0:00:01.256)       0:00:12.497 ******
2025-09-19 01:16:57.376684 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-19 01:16:57.376695 | orchestrator |
2025-09-19 01:16:57.376706 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-09-19 01:16:57.376717 | orchestrator | Friday 19 September 2025  01:16:54 +0000 (0:00:00.298)       0:00:12.796 ******
2025-09-19 01:16:57.376727 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-19 01:16:57.376738 | orchestrator |
2025-09-19 01:16:57.376749 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 01:16:57.376759 | orchestrator | Friday 19 September 2025  01:16:54 +0000 (0:00:00.255)       0:00:13.051 ******
2025-09-19 01:16:57.376770 | orchestrator |
2025-09-19 01:16:57.376781 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 01:16:57.376792 | orchestrator | Friday 19 September 2025  01:16:54 +0000 (0:00:00.068)       0:00:13.120 ******
2025-09-19 01:16:57.376810 | orchestrator |
2025-09-19 01:16:57.376821 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 01:16:57.376832 | orchestrator | Friday 19 September 2025  01:16:54 +0000 (0:00:00.072)       0:00:13.192 ******
2025-09-19 01:16:57.376842 | orchestrator |
2025-09-19 01:16:57.376853 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-09-19 01:16:57.376864 | orchestrator | Friday 19 September 2025  01:16:54 +0000 (0:00:00.114)       0:00:13.306 ******
2025-09-19 01:16:57.376875 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-19 01:16:57.376885 | orchestrator |
2025-09-19 01:16:57.376896 | orchestrator | TASK [Print report file information] *******************************************
2025-09-19 01:16:57.376907 | orchestrator | Friday 19 September 2025  01:16:56 +0000 (0:00:01.676)       0:00:14.983 ******
2025-09-19 01:16:57.376918 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-09-19 01:16:57.376929 | orchestrator |     "msg": [
2025-09-19 01:16:57.376940 | orchestrator |         "Validator run completed.",
2025-09-19 01:16:57.376951 | orchestrator |         "You can find the report file here:",
2025-09-19 01:16:57.376962 | orchestrator |         "/opt/reports/validator/ceph-mgrs-validator-2025-09-19T01:16:42+00:00-report.json",
2025-09-19 01:16:57.376974 | orchestrator |         "on the following host:",
2025-09-19 01:16:57.376985 | orchestrator |         "testbed-manager"
2025-09-19 01:16:57.376996 | orchestrator |     ]
2025-09-19 01:16:57.377007 | orchestrator | }
2025-09-19 01:16:57.377018 | orchestrator |
2025-09-19 01:16:57.377029 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 01:16:57.377041 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-09-19 01:16:57.377053 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 01:16:57.377073 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 01:16:57.791852 | orchestrator |
2025-09-19 01:16:57.791948 | orchestrator |
2025-09-19 01:16:57.791962 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 01:16:57.791976 | orchestrator | Friday 19 September 2025  01:16:57 +0000 (0:00:00.924)       0:00:15.908 ******
2025-09-19 01:16:57.791987 | orchestrator | ===============================================================================
2025-09-19 01:16:57.791998 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.94s
2025-09-19 01:16:57.792009 | orchestrator | Write report file ------------------------------------------------------- 1.68s
2025-09-19 01:16:57.792020 | orchestrator | Aggregate test results step one ----------------------------------------- 1.26s
2025-09-19 01:16:57.792031 | orchestrator | Get container info ------------------------------------------------------ 1.05s
2025-09-19 01:16:57.792042 | orchestrator | Print report file information ------------------------------------------- 0.92s
2025-09-19 01:16:57.792052 | orchestrator | Create report output directory ------------------------------------------ 0.88s
2025-09-19 01:16:57.792063 | orchestrator | Aggregate test results step two ----------------------------------------- 0.79s
2025-09-19 01:16:57.792074 | orchestrator | Get timestamp for report file ------------------------------------------- 0.67s
2025-09-19 01:16:57.792103 | orchestrator | Set test result to passed if container is existing ---------------------- 0.56s
2025-09-19 01:16:57.792114 | orchestrator | Fail test if mgr modules are disabled that should be enabled ------------ 0.39s
2025-09-19 01:16:57.792125 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s
2025-09-19 01:16:57.792136 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.33s
2025-09-19 01:16:57.792146 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.32s
2025-09-19 01:16:57.792157 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.32s
2025-09-19 01:16:57.792188 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s
2025-09-19 01:16:57.792199 | orchestrator | Aggregate test results step two ----------------------------------------- 0.30s
2025-09-19 01:16:57.792210 | orchestrator | Prepare test data for container existance test -------------------------- 0.29s
2025-09-19 01:16:57.792220 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.28s
2025-09-19 01:16:57.792231 | orchestrator | Fail due to missing containers ------------------------------------------ 0.27s
2025-09-19 01:16:57.792242 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.26s
2025-09-19 01:16:58.090424 | orchestrator | + osism validate ceph-osds
2025-09-19 01:17:18.961800 | orchestrator |
2025-09-19 01:17:18.961875 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2025-09-19 01:17:18.961881 | orchestrator |
2025-09-19 01:17:18.961885 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-09-19 01:17:18.961890 | orchestrator | Friday 19 September 2025  01:17:14 +0000 (0:00:00.421)       0:00:00.421 ******
2025-09-19 01:17:18.961895 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-19 01:17:18.961899 | orchestrator |
2025-09-19 01:17:18.961903 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-19 01:17:18.961907 | orchestrator | Friday 19 September 2025  01:17:15 +0000 (0:00:00.675)       0:00:01.097 ******
2025-09-19 01:17:18.961911 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-19 01:17:18.961915 | orchestrator |
2025-09-19 01:17:18.961919 | orchestrator | TASK [Create report output directory] ******************************************
2025-09-19 01:17:18.961923 | orchestrator | Friday 19 September 2025  01:17:15 +0000 (0:00:00.326)       0:00:01.423 ******
2025-09-19 01:17:18.961926 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-19 01:17:18.961930 | orchestrator |
2025-09-19 01:17:18.961934 | orchestrator | TASK [Define report vars] ******************************************************
2025-09-19 01:17:18.961938 | orchestrator | Friday 19 September 2025  01:17:16 +0000 (0:00:00.988)       0:00:02.412 ******
2025-09-19 01:17:18.961942 | orchestrator | ok: [testbed-node-3]
2025-09-19 01:17:18.961946 | orchestrator |
2025-09-19 01:17:18.961950 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-09-19 01:17:18.961954 | orchestrator | Friday 19 September 2025  01:17:16 +0000 (0:00:00.129)       0:00:02.542 ******
2025-09-19 01:17:18.961958 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:17:18.961962 | orchestrator |
2025-09-19 01:17:18.961965 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-09-19 01:17:18.961969 | orchestrator | Friday 19 September 2025  01:17:16 +0000 (0:00:00.119)       0:00:02.661 ******
2025-09-19 01:17:18.961973 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:17:18.961976 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:17:18.961980 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:17:18.961984 | orchestrator |
2025-09-19 01:17:18.961988 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-09-19 01:17:18.961991 | orchestrator | Friday 19 September 2025  01:17:17 +0000 (0:00:00.315)       0:00:02.977 ******
2025-09-19 01:17:18.961995 | orchestrator | ok: [testbed-node-3]
2025-09-19 01:17:18.961999 | orchestrator |
2025-09-19 01:17:18.962002 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-09-19 01:17:18.962006 | orchestrator | Friday 19 September 2025  01:17:17 +0000 (0:00:00.143)       0:00:03.120 ******
2025-09-19 01:17:18.962010 | orchestrator | ok: [testbed-node-3]
2025-09-19 01:17:18.962055 | orchestrator | ok: [testbed-node-4]
2025-09-19 01:17:18.962059 | orchestrator | ok: [testbed-node-5]
2025-09-19 01:17:18.962063 | orchestrator |
2025-09-19 01:17:18.962067 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2025-09-19 01:17:18.962071 | orchestrator | Friday 19 September 2025  01:17:17 +0000 (0:00:00.310)       0:00:03.430 ******
2025-09-19 01:17:18.962075 | orchestrator | ok: [testbed-node-3]
2025-09-19 01:17:18.962093 | orchestrator |
2025-09-19 01:17:18.962097 | orchestrator | TASK [Prepare test data] *******************************************************
2025-09-19 01:17:18.962101 | orchestrator | Friday 19 September 2025  01:17:18 +0000 (0:00:00.558)       0:00:03.989 ******
2025-09-19 01:17:18.962104 | orchestrator | ok: [testbed-node-3]
2025-09-19 01:17:18.962108 | orchestrator | ok: [testbed-node-4]
2025-09-19 01:17:18.962112 | orchestrator | ok: [testbed-node-5]
2025-09-19 01:17:18.962115 | orchestrator |
2025-09-19 01:17:18.962119 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2025-09-19 01:17:18.962123 | orchestrator | Friday 19 September 2025  01:17:18 +0000 (0:00:00.495)       0:00:04.484 ******
2025-09-19 01:17:18.962128 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cb8ff195dd1b7369147b4520015244d38c79eb4521e55448bbe60d08ef42a912', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-09-19 01:17:18.962134 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7d47aaac58b94d763e24e4cf5dc53b769321bd11c2fb64d800a31a130e8daeab', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-09-19 01:17:18.962138 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'afe6d866a1b8eea5aae6e992b02903532caef46459af68217ccebdb09b336509', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})
2025-09-19 01:17:18.962144 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4beafe58b1b24dd1c4899a1bfedac1bdf130fcbc2c507d3c9933d259f449d8f6', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-09-19 01:17:18.962148 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3a36445f9b2243f2258d2a2dd7a2014a4b8be5343503b7f24b59b07707af3ccc', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-09-19 01:17:18.962161 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd81537e4c5dc968e4bc6ec910e0223287c1905002d1c327fa334e735d34607b4', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2025-09-19 01:17:18.962175 | orchestrator | skipping: [testbed-node-3] => (item={'id': '510a07343b1e4492c63a2c56a65753b701667260ac32a46867e30fe642542cd8', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})
2025-09-19 01:17:18.962186 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f77d0ea319f1890a68b2e9e043577f85fb5d8cbaf8e3fc5ee0e491529cfc85a9', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2025-09-19 01:17:18.962192 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1cd783a584648aa4ee8e9cbb58f8b9b7f5f5262f869f95cc777cc1e77cc8b0c8', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})
2025-09-19 01:17:18.962195 | orchestrator | skipping: [testbed-node-3] => (item={'id': '341488a58d9455af8664e71600ace4f3679c1e92f071f7f255f0326030c3fe50', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 23 minutes'})
2025-09-19 01:17:18.962200 | orchestrator | skipping: [testbed-node-3] => (item={'id': '742d7deca1103144a48125b27f56005fd94c03e27f29f5c1bb06e1571e95f21a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-19 01:17:18.962209 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e27d034bd88a5b163bd591e9593690bf9e7f171ae4ec25d598543168023179bb', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 25 minutes'})
2025-09-19 01:17:18.962214 | orchestrator | ok: [testbed-node-3] => (item={'id': '33b7d5b71f309a2afca6e0fd2ba835d51150be109de46de1599672d4b53f5254', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 26 minutes'})
2025-09-19 01:17:18.962219 | orchestrator | ok: [testbed-node-3] => (item={'id': 'ac0b0eb6192a5fa02657f44c5aea7f80eb724d448adf3142eaf577c82f569e7d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 26 minutes'})
2025-09-19 01:17:18.962223 | orchestrator | skipping: [testbed-node-3] => (item={'id': '24ce406089f720479c7bd7e6c837a274bdec1f39d7aa0bef5b43fd565508e1e2', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})
2025-09-19 01:17:18.962227 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cdd446510923c7997c504f30c8ee872768744e1cc84451058f0a8c519937844c', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2025-09-19 01:17:18.962233 | orchestrator | skipping: [testbed-node-3] => (item={'id': '70f68807842c3ce53c1444f5cbcec3c8400a3ec795bae814d713bd3dbaeb3c0e', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})
2025-09-19 01:17:18.962237 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b715a8afa71ad212a6fcd4c51a31284aeda11723c09b436b8525943e7aa436c1', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 32 minutes'})
2025-09-19 01:17:18.962241 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cea20a7bfe2701fffdf70c05b975fbeea8cc4fd42467be488bd051bea18681b4', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 32 minutes'})
2025-09-19 01:17:18.962245 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd5cb3d6b51288f9f4d52f5dab0aa2d257d38ba1e669bbd38d2748c3dca2e90dd', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})
2025-09-19 01:17:18.962252 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0c94618769117da3a3f22a48ca55b54c4359f1d7c76aa547a0804424cabeb56d', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-09-19 01:17:19.250569 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5eacfc9ec6c6eee03a20084c1fd41a941fafa841d046bbfd3fc4f6fff85eddaa', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-09-19 01:17:19.250739 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1019f39dd4e5c050e2a9974ff315d8de781e03b073204a1a1b13edc240ff041d', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})
2025-09-19 01:17:19.250756 | orchestrator | skipping: [testbed-node-4] => (item={'id': '431ff1593c99bc8bfacaa0dbd2a9114e80e22edf9771f8df43b4d187d5505a7d', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-09-19 01:17:19.250768 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5e0c62bfc10a148cd077eb4c1fc90bea4aa1debcd7cf0272ad764e8f1cc6839f', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-09-19 01:17:19.250801 | orchestrator | skipping: [testbed-node-4] => (item={'id': '64006061bca061ada857bacca729f78a60d2ac088482305b5aadae3022da8675', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2025-09-19 01:17:19.250825 | orchestrator | skipping: [testbed-node-4] => (item={'id': '53ca4b7978eac7c0949f8a8bd1008cd1cb2976dcb22f3930151ab8d621a523b4', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})
2025-09-19 01:17:19.250837 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0a8fdb6839110206973f37e81f3c48d07f98f79d93ab61aec1ded2166079c4b7', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2025-09-19 01:17:19.250850 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c208d1b8a806dd61a6d8f77bc20edb28a505fe8b4f39c7e57668491debd573b3', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})
2025-09-19 01:17:19.250862 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1bd6ab7460e28e807ded64f6770063937c192e0ad35ec1ebf3ceedcb7cd03e1c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 23 minutes'})
2025-09-19 01:17:19.250874 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0e79e2d5e3b7fa2e179558508ac53be063b795517277e6aa7c8ec1757527c84c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-19 01:17:19.250898 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8a7c368fa1cefa5fc0f11f244675ab8fa634ca20bb1fe6b0fe9d918f7139884f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 25 minutes'})
2025-09-19 01:17:19.250912 | orchestrator | ok: [testbed-node-4] => (item={'id': '8c937bda7d9fa5cec8486e4dc2ff65700053d0afbf9a4def30b82ad6adc69bb8', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 26 minutes'})
2025-09-19 01:17:19.250925 | orchestrator | ok: [testbed-node-4] => (item={'id': '8f5f7dc95ca86d74936db560ba36a9f9eb955d85587e90f552f6f741f8592637', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 26 minutes'})
2025-09-19 01:17:19.250936 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9b4f20af5aa268fe2518a7fbf47bbae80902d5a987e3b693ef44575f027d7d2c', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})
2025-09-19 01:17:19.250962 | orchestrator | skipping: [testbed-node-4] => (item={'id': '18f10a3ef54a808d5ee56c4b230217aff397e3a6894a2a89cc2da2e48d5fdd89', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2025-09-19 01:17:19.250975 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd8bcdd34bdd24e6a25667d43cea45967f5ac7dbcf969d36530fb4a7d58366cca', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})
2025-09-19 01:17:19.250986 | orchestrator | skipping: [testbed-node-4] => (item={'id': '89f90f975bd3e05f9b55b8061a46f9d464d274cc637ae08a4009b7f22acf5cb9', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 32 minutes'})
2025-09-19 01:17:19.251004 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0689165f950ff4f4dc4f3132a99aaffe1fd6cf906f6c6b4a67b4303f67635ec2', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 32 minutes'})
2025-09-19 01:17:19.251016 | orchestrator | skipping: [testbed-node-4] => (item={'id': '603f00c47e08395e74e3903a93c672e59e489d5830be52747f5e6a6b1fa18952', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})
2025-09-19 01:17:19.251027 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b070532ddb80a4dfc65e3f3f40c5766596f196006b1c9c3766b4d68115e71ad2', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-09-19 01:17:19.251039 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fbdcbd77e5be11b21bfc5aba82a8a50efc67861bc0b02600d0c2b6f5ed7dcf55', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-09-19 01:17:19.251049 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'aa4dca01a35a30e55464e24ce115f853519cd2060c066b87f870fbca263349c7', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})
2025-09-19 01:17:19.251061 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b851c86309e7b8924b313665737b66153e02dc6c13efcae915cbbd985a32d4eb', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-09-19 01:17:19.251072 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e5478c251574575633caee2e9adecb12f4a029e6b6a0dfe615a91a87d7c46eff', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-09-19 01:17:19.251082 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2a2f72f1213533b67dbc477629cdd81cf5e3de0f0a532f7041b27322a1ed001c', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2025-09-19 01:17:19.251094 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0bd805134f2d7e54602a714c1e006aa10a0efca34247b8dc581f238cf4b70ed2', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})
2025-09-19 01:17:19.251107 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd52f4140050579d8d654166435a06a8444b25380af34f73bedbaf34ac457562f', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2025-09-19 01:17:19.251120 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2bdb9bd8558d4d03a354e4c5f5aaa84a4b9dd2ab4f6f18d34d24fd82af3b41ce', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})
2025-09-19 01:17:19.251133 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7eb81f83c8517a61dd3f3d23f7b9913c430f10935aa8ddc49d6ed2e1a220154d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 23 minutes'})
2025-09-19 01:17:19.251153 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8373a5aef3b2176911f675639b138b65920875fa165973b7f068b8eccb29a7ac', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-19 01:17:26.930453 | orchestrator | skipping: [testbed-node-5] => (item={'id': '44b7c5993c582d1b8bd1a806dc7d59009af79981b794d86769b7ad76b870027d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 25 minutes'})
2025-09-19 01:17:26.930585 | orchestrator | ok: [testbed-node-5] => (item={'id': '430c2fa6ffd5e01ed47c42926bb3f2683a606dde2555370f8f9d6452d02ddf3b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 26 minutes'})
2025-09-19 01:17:26.930645 | orchestrator | ok: [testbed-node-5] => (item={'id': 'fd6af8b00d67c40bebcbdb1ed329f783c26f957f00a622fb8add54e61a78e60c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 26 minutes'})
2025-09-19 01:17:26.930659 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cbca626071139efdcfb2196f241cb686f35d517273d7c971287c65d95be31ed2', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})
2025-09-19 01:17:26.930719 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e3d6201a0a6e87c30c9427fcaebd1e93d77c0698b3d8794d216ac69c5b3218a6', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2025-09-19 01:17:26.930734 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c14342ebdf436df8fe7fed1f9ebadd9453170fb9dfb6312e9c7b464643653311', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})
2025-09-19 01:17:26.930746 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0342d7d46b616712fefe1df3571e6da91418ebbacb793f669399cb5769993081', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 32 minutes'})
2025-09-19 01:17:26.930757 | orchestrator | skipping: [testbed-node-5] => (item={'id': '649af3ca4c6ac73eaf884638d67fb4171ead47df8a3f44f1e49846a23c5a2c06', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 32 minutes'})
2025-09-19 01:17:26.930768 | orchestrator | skipping: [testbed-node-5] => (item={'id': '68b6b739a733da3111386a860b8dccbce0a0cd612a7a64924dca82fe86340973', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})
2025-09-19 01:17:26.930779 | orchestrator |
2025-09-19 01:17:26.930792 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2025-09-19 01:17:26.930804 | orchestrator | Friday 19 September 2025  01:17:19 +0000 (0:00:00.574)       0:00:05.059 ******
2025-09-19 01:17:26.930815 | orchestrator | ok: [testbed-node-3]
2025-09-19 01:17:26.930826 | orchestrator | ok: [testbed-node-4]
2025-09-19 01:17:26.930837 | orchestrator | ok: [testbed-node-5]
2025-09-19 01:17:26.930848 | orchestrator |
2025-09-19 01:17:26.930859 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2025-09-19 01:17:26.930869 | orchestrator | Friday 19 September 2025  01:17:19 +0000 (0:00:00.307)       0:00:05.366 ******
2025-09-19 01:17:26.930885 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:17:26.930898 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:17:26.930909 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:17:26.930920 | orchestrator |
2025-09-19 01:17:26.930931 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2025-09-19 01:17:26.930942 | orchestrator | Friday 19 September 2025  01:17:19 +0000 (0:00:00.301)       0:00:05.667 ******
2025-09-19 01:17:26.930953 | orchestrator | ok: [testbed-node-3]
2025-09-19 01:17:26.930964 | orchestrator | ok: [testbed-node-4]
2025-09-19 01:17:26.930974 | orchestrator | ok: [testbed-node-5]
2025-09-19 01:17:26.930985 | orchestrator |
2025-09-19 01:17:26.930997 | orchestrator | TASK [Prepare test data] *******************************************************
2025-09-19 01:17:26.931022 | orchestrator | Friday 19 September 2025  01:17:20 +0000 (0:00:00.535)       0:00:06.202 ******
2025-09-19 01:17:26.931035 | orchestrator | ok: [testbed-node-3]
2025-09-19 01:17:26.931048 | orchestrator | ok: [testbed-node-4]
2025-09-19 01:17:26.931060 | orchestrator | ok: [testbed-node-5]
2025-09-19 01:17:26.931071 | orchestrator |
2025-09-19 01:17:26.931082 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2025-09-19 01:17:26.931092 | orchestrator | Friday 19 September 2025  01:17:20 +0000 (0:00:00.308)       0:00:06.511 ******
2025-09-19 01:17:26.931103 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2025-09-19 01:17:26.931115 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2025-09-19 01:17:26.931126 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:17:26.931137 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2025-09-19 01:17:26.931147 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2025-09-19 01:17:26.931175 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:17:26.931187 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2025-09-19 01:17:26.931198 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2025-09-19 01:17:26.931208 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:17:26.931219 | orchestrator |
2025-09-19 01:17:26.931230 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2025-09-19 01:17:26.931240 | orchestrator | Friday 19 September 2025  01:17:20 +0000 (0:00:00.306)       0:00:06.817 ******
2025-09-19 01:17:26.931251 | orchestrator | ok: [testbed-node-3]
2025-09-19 01:17:26.931262 | orchestrator | ok: [testbed-node-4]
2025-09-19 01:17:26.931273 | orchestrator | ok: [testbed-node-5]
2025-09-19 01:17:26.931283 | orchestrator |
2025-09-19 01:17:26.931294 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-09-19 01:17:26.931305 | orchestrator | Friday 19 September 2025  01:17:21 +0000 (0:00:00.320)       0:00:07.138 ******
2025-09-19 01:17:26.931315 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:17:26.931326 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:17:26.931337 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:17:26.931347 | orchestrator |
2025-09-19 01:17:26.931358 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-09-19 01:17:26.931368 | orchestrator | Friday 19 September 2025  01:17:21 +0000 (0:00:00.510)       0:00:07.648 ******
2025-09-19 01:17:26.931379 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:17:26.931390 | orchestrator | skipping: [testbed-node-4]
2025-09-19 01:17:26.931400 | orchestrator | skipping: [testbed-node-5]
2025-09-19 01:17:26.931411 | orchestrator |
2025-09-19 01:17:26.931421 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2025-09-19 01:17:26.931432 | orchestrator | Friday 19 September 2025  01:17:22 +0000 (0:00:00.323)       0:00:07.972 ******
2025-09-19 01:17:26.931443 | orchestrator | ok: [testbed-node-3]
2025-09-19 01:17:26.931454 | orchestrator | ok: [testbed-node-4]
2025-09-19 01:17:26.931464 | orchestrator | ok: [testbed-node-5]
2025-09-19 01:17:26.931475 | orchestrator |
2025-09-19 01:17:26.931485 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-09-19 01:17:26.931496 | orchestrator | Friday 19 September 2025  01:17:22 +0000 (0:00:00.248)       0:00:08.279 ******
2025-09-19 01:17:26.931507 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:17:26.931518 | orchestrator |
2025-09-19 01:17:26.931528 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-09-19 01:17:26.931539 | orchestrator | Friday 19 September 2025  01:17:22 +0000 (0:00:00.250)       0:00:08.527 ******
2025-09-19 01:17:26.931550 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:17:26.931560 | orchestrator |
2025-09-19 01:17:26.931571 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-09-19 01:17:26.931589 | orchestrator | Friday 19 September 2025  01:17:22 +0000 (0:00:00.250)       0:00:08.778 ******
2025-09-19 01:17:26.931599 | orchestrator | skipping: [testbed-node-3]
2025-09-19 01:17:26.931642 | orchestrator |
2025-09-19 01:17:26.931654 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 01:17:26.931664 | orchestrator | Friday 19 September 2025  01:17:23 +0000 (0:00:00.229)       0:00:09.008 ******
2025-09-19 01:17:26.931675 | orchestrator |
2025-09-19 01:17:26.931686 | orchestrator | TASK [Flush handlers]
********************************************************** 2025-09-19 01:17:26.931696 | orchestrator | Friday 19 September 2025 01:17:23 +0000 (0:00:00.089) 0:00:09.098 ****** 2025-09-19 01:17:26.931707 | orchestrator | 2025-09-19 01:17:26.931718 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 01:17:26.931728 | orchestrator | Friday 19 September 2025 01:17:23 +0000 (0:00:00.064) 0:00:09.162 ****** 2025-09-19 01:17:26.931738 | orchestrator | 2025-09-19 01:17:26.931749 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-19 01:17:26.931760 | orchestrator | Friday 19 September 2025 01:17:23 +0000 (0:00:00.301) 0:00:09.463 ****** 2025-09-19 01:17:26.931770 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:17:26.931781 | orchestrator | 2025-09-19 01:17:26.931791 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-09-19 01:17:26.931802 | orchestrator | Friday 19 September 2025 01:17:23 +0000 (0:00:00.270) 0:00:09.734 ****** 2025-09-19 01:17:26.931826 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:17:26.931837 | orchestrator | 2025-09-19 01:17:26.931848 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-19 01:17:26.931859 | orchestrator | Friday 19 September 2025 01:17:24 +0000 (0:00:00.265) 0:00:09.999 ****** 2025-09-19 01:17:26.931869 | orchestrator | ok: [testbed-node-3] 2025-09-19 01:17:26.931880 | orchestrator | ok: [testbed-node-4] 2025-09-19 01:17:26.931890 | orchestrator | ok: [testbed-node-5] 2025-09-19 01:17:26.931901 | orchestrator | 2025-09-19 01:17:26.931911 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-09-19 01:17:26.931922 | orchestrator | Friday 19 September 2025 01:17:24 +0000 (0:00:00.313) 0:00:10.313 ****** 2025-09-19 01:17:26.931933 | orchestrator | ok: 
[testbed-node-3] 2025-09-19 01:17:26.931943 | orchestrator | 2025-09-19 01:17:26.931954 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-09-19 01:17:26.931964 | orchestrator | Friday 19 September 2025 01:17:24 +0000 (0:00:00.258) 0:00:10.572 ****** 2025-09-19 01:17:26.931975 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-19 01:17:26.931985 | orchestrator | 2025-09-19 01:17:26.931996 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-09-19 01:17:26.932007 | orchestrator | Friday 19 September 2025 01:17:26 +0000 (0:00:01.611) 0:00:12.183 ****** 2025-09-19 01:17:26.932018 | orchestrator | ok: [testbed-node-3] 2025-09-19 01:17:26.932028 | orchestrator | 2025-09-19 01:17:26.932039 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-09-19 01:17:26.932050 | orchestrator | Friday 19 September 2025 01:17:26 +0000 (0:00:00.141) 0:00:12.325 ****** 2025-09-19 01:17:26.932061 | orchestrator | ok: [testbed-node-3] 2025-09-19 01:17:26.932071 | orchestrator | 2025-09-19 01:17:26.932082 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-09-19 01:17:26.932092 | orchestrator | Friday 19 September 2025 01:17:26 +0000 (0:00:00.302) 0:00:12.628 ****** 2025-09-19 01:17:26.932109 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:17:40.112912 | orchestrator | 2025-09-19 01:17:40.112994 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-09-19 01:17:40.113002 | orchestrator | Friday 19 September 2025 01:17:26 +0000 (0:00:00.125) 0:00:12.754 ****** 2025-09-19 01:17:40.113008 | orchestrator | ok: [testbed-node-3] 2025-09-19 01:17:40.113014 | orchestrator | 2025-09-19 01:17:40.113019 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-19 
01:17:40.113024 | orchestrator | Friday 19 September 2025 01:17:27 +0000 (0:00:00.147) 0:00:12.902 ****** 2025-09-19 01:17:40.113042 | orchestrator | ok: [testbed-node-3] 2025-09-19 01:17:40.113047 | orchestrator | ok: [testbed-node-4] 2025-09-19 01:17:40.113051 | orchestrator | ok: [testbed-node-5] 2025-09-19 01:17:40.113056 | orchestrator | 2025-09-19 01:17:40.113060 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-09-19 01:17:40.113065 | orchestrator | Friday 19 September 2025 01:17:27 +0000 (0:00:00.598) 0:00:13.500 ****** 2025-09-19 01:17:40.113070 | orchestrator | changed: [testbed-node-3] 2025-09-19 01:17:40.113075 | orchestrator | changed: [testbed-node-4] 2025-09-19 01:17:40.113080 | orchestrator | changed: [testbed-node-5] 2025-09-19 01:17:40.113084 | orchestrator | 2025-09-19 01:17:40.113089 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-09-19 01:17:40.113093 | orchestrator | Friday 19 September 2025 01:17:30 +0000 (0:00:02.373) 0:00:15.873 ****** 2025-09-19 01:17:40.113098 | orchestrator | ok: [testbed-node-3] 2025-09-19 01:17:40.113103 | orchestrator | ok: [testbed-node-4] 2025-09-19 01:17:40.113107 | orchestrator | ok: [testbed-node-5] 2025-09-19 01:17:40.113112 | orchestrator | 2025-09-19 01:17:40.113116 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-09-19 01:17:40.113121 | orchestrator | Friday 19 September 2025 01:17:30 +0000 (0:00:00.316) 0:00:16.190 ****** 2025-09-19 01:17:40.113125 | orchestrator | ok: [testbed-node-3] 2025-09-19 01:17:40.113130 | orchestrator | ok: [testbed-node-4] 2025-09-19 01:17:40.113134 | orchestrator | ok: [testbed-node-5] 2025-09-19 01:17:40.113138 | orchestrator | 2025-09-19 01:17:40.113143 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-09-19 01:17:40.113148 | orchestrator | Friday 19 September 2025 
01:17:30 +0000 (0:00:00.503) 0:00:16.693 ****** 2025-09-19 01:17:40.113152 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:17:40.113157 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:17:40.113162 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:17:40.113166 | orchestrator | 2025-09-19 01:17:40.113171 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-09-19 01:17:40.113175 | orchestrator | Friday 19 September 2025 01:17:31 +0000 (0:00:00.511) 0:00:17.205 ****** 2025-09-19 01:17:40.113180 | orchestrator | ok: [testbed-node-3] 2025-09-19 01:17:40.113184 | orchestrator | ok: [testbed-node-4] 2025-09-19 01:17:40.113189 | orchestrator | ok: [testbed-node-5] 2025-09-19 01:17:40.113193 | orchestrator | 2025-09-19 01:17:40.113198 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-09-19 01:17:40.113202 | orchestrator | Friday 19 September 2025 01:17:31 +0000 (0:00:00.321) 0:00:17.527 ****** 2025-09-19 01:17:40.113207 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:17:40.113211 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:17:40.113216 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:17:40.113220 | orchestrator | 2025-09-19 01:17:40.113225 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-09-19 01:17:40.113229 | orchestrator | Friday 19 September 2025 01:17:31 +0000 (0:00:00.302) 0:00:17.829 ****** 2025-09-19 01:17:40.113234 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:17:40.113238 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:17:40.113243 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:17:40.113247 | orchestrator | 2025-09-19 01:17:40.113252 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-19 01:17:40.113256 | orchestrator | Friday 19 September 2025 01:17:32 +0000 
(0:00:00.307) 0:00:18.137 ****** 2025-09-19 01:17:40.113261 | orchestrator | ok: [testbed-node-3] 2025-09-19 01:17:40.113265 | orchestrator | ok: [testbed-node-4] 2025-09-19 01:17:40.113270 | orchestrator | ok: [testbed-node-5] 2025-09-19 01:17:40.113274 | orchestrator | 2025-09-19 01:17:40.113279 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-09-19 01:17:40.113283 | orchestrator | Friday 19 September 2025 01:17:33 +0000 (0:00:00.795) 0:00:18.932 ****** 2025-09-19 01:17:40.113288 | orchestrator | ok: [testbed-node-3] 2025-09-19 01:17:40.113308 | orchestrator | ok: [testbed-node-4] 2025-09-19 01:17:40.113312 | orchestrator | ok: [testbed-node-5] 2025-09-19 01:17:40.113317 | orchestrator | 2025-09-19 01:17:40.113322 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-09-19 01:17:40.113326 | orchestrator | Friday 19 September 2025 01:17:33 +0000 (0:00:00.485) 0:00:19.417 ****** 2025-09-19 01:17:40.113331 | orchestrator | ok: [testbed-node-3] 2025-09-19 01:17:40.113336 | orchestrator | ok: [testbed-node-4] 2025-09-19 01:17:40.113340 | orchestrator | ok: [testbed-node-5] 2025-09-19 01:17:40.113344 | orchestrator | 2025-09-19 01:17:40.113349 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-09-19 01:17:40.113353 | orchestrator | Friday 19 September 2025 01:17:33 +0000 (0:00:00.305) 0:00:19.723 ****** 2025-09-19 01:17:40.113358 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:17:40.113362 | orchestrator | skipping: [testbed-node-4] 2025-09-19 01:17:40.113367 | orchestrator | skipping: [testbed-node-5] 2025-09-19 01:17:40.113371 | orchestrator | 2025-09-19 01:17:40.113376 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-09-19 01:17:40.113380 | orchestrator | Friday 19 September 2025 01:17:34 +0000 (0:00:00.333) 0:00:20.057 ****** 2025-09-19 
01:17:40.113385 | orchestrator | ok: [testbed-node-3] 2025-09-19 01:17:40.113389 | orchestrator | ok: [testbed-node-4] 2025-09-19 01:17:40.113394 | orchestrator | ok: [testbed-node-5] 2025-09-19 01:17:40.113398 | orchestrator | 2025-09-19 01:17:40.113403 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-09-19 01:17:40.113408 | orchestrator | Friday 19 September 2025 01:17:34 +0000 (0:00:00.500) 0:00:20.557 ****** 2025-09-19 01:17:40.113412 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 01:17:40.113417 | orchestrator | 2025-09-19 01:17:40.113422 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-09-19 01:17:40.113426 | orchestrator | Friday 19 September 2025 01:17:35 +0000 (0:00:00.293) 0:00:20.851 ****** 2025-09-19 01:17:40.113431 | orchestrator | skipping: [testbed-node-3] 2025-09-19 01:17:40.113435 | orchestrator | 2025-09-19 01:17:40.113449 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-19 01:17:40.113454 | orchestrator | Friday 19 September 2025 01:17:35 +0000 (0:00:00.252) 0:00:21.104 ****** 2025-09-19 01:17:40.113459 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 01:17:40.113463 | orchestrator | 2025-09-19 01:17:40.113468 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-19 01:17:40.113473 | orchestrator | Friday 19 September 2025 01:17:36 +0000 (0:00:01.634) 0:00:22.739 ****** 2025-09-19 01:17:40.113477 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 01:17:40.113483 | orchestrator | 2025-09-19 01:17:40.113488 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-19 01:17:40.113494 | orchestrator | Friday 19 September 2025 01:17:37 +0000 (0:00:00.249) 0:00:22.988 ****** 2025-09-19 
01:17:40.113499 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 01:17:40.113504 | orchestrator | 2025-09-19 01:17:40.113509 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 01:17:40.113515 | orchestrator | Friday 19 September 2025 01:17:37 +0000 (0:00:00.331) 0:00:23.319 ****** 2025-09-19 01:17:40.113520 | orchestrator | 2025-09-19 01:17:40.113525 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 01:17:40.113531 | orchestrator | Friday 19 September 2025 01:17:37 +0000 (0:00:00.079) 0:00:23.399 ****** 2025-09-19 01:17:40.113536 | orchestrator | 2025-09-19 01:17:40.113541 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 01:17:40.113547 | orchestrator | Friday 19 September 2025 01:17:37 +0000 (0:00:00.064) 0:00:23.464 ****** 2025-09-19 01:17:40.113552 | orchestrator | 2025-09-19 01:17:40.113557 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-09-19 01:17:40.113562 | orchestrator | Friday 19 September 2025 01:17:37 +0000 (0:00:00.068) 0:00:23.532 ****** 2025-09-19 01:17:40.113571 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 01:17:40.113577 | orchestrator | 2025-09-19 01:17:40.113582 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-19 01:17:40.113587 | orchestrator | Friday 19 September 2025 01:17:39 +0000 (0:00:01.551) 0:00:25.083 ****** 2025-09-19 01:17:40.113592 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-09-19 01:17:40.113598 | orchestrator |  "msg": [ 2025-09-19 01:17:40.113603 | orchestrator |  "Validator run completed.", 2025-09-19 01:17:40.113609 | orchestrator |  "You can find the report file here:", 2025-09-19 01:17:40.113614 | orchestrator |  
"/opt/reports/validator/ceph-osds-validator-2025-09-19T01:17:15+00:00-report.json", 2025-09-19 01:17:40.113639 | orchestrator |  "on the following host:", 2025-09-19 01:17:40.113644 | orchestrator |  "testbed-manager" 2025-09-19 01:17:40.113649 | orchestrator |  ] 2025-09-19 01:17:40.113655 | orchestrator | } 2025-09-19 01:17:40.113660 | orchestrator | 2025-09-19 01:17:40.113665 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 01:17:40.113672 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-09-19 01:17:40.113678 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-19 01:17:40.113684 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-19 01:17:40.113689 | orchestrator | 2025-09-19 01:17:40.113694 | orchestrator | 2025-09-19 01:17:40.113699 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 01:17:40.113705 | orchestrator | Friday 19 September 2025 01:17:40 +0000 (0:00:00.825) 0:00:25.909 ****** 2025-09-19 01:17:40.113710 | orchestrator | =============================================================================== 2025-09-19 01:17:40.113715 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.37s 2025-09-19 01:17:40.113721 | orchestrator | Aggregate test results step one ----------------------------------------- 1.63s 2025-09-19 01:17:40.113726 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.61s 2025-09-19 01:17:40.113731 | orchestrator | Write report file ------------------------------------------------------- 1.55s 2025-09-19 01:17:40.113737 | orchestrator | Create report output directory ------------------------------------------ 0.99s 2025-09-19 01:17:40.113741 | orchestrator | 
Print report file information ------------------------------------------- 0.83s 2025-09-19 01:17:40.113746 | orchestrator | Prepare test data ------------------------------------------------------- 0.80s 2025-09-19 01:17:40.113750 | orchestrator | Get timestamp for report file ------------------------------------------- 0.68s 2025-09-19 01:17:40.113755 | orchestrator | Prepare test data ------------------------------------------------------- 0.60s 2025-09-19 01:17:40.113759 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.57s 2025-09-19 01:17:40.113764 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.56s 2025-09-19 01:17:40.113769 | orchestrator | Set test result to passed if count matches ------------------------------ 0.54s 2025-09-19 01:17:40.113778 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.51s 2025-09-19 01:17:40.113783 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.51s 2025-09-19 01:17:40.113787 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.50s 2025-09-19 01:17:40.113792 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.50s 2025-09-19 01:17:40.113800 | orchestrator | Prepare test data ------------------------------------------------------- 0.50s 2025-09-19 01:17:40.414967 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.49s 2025-09-19 01:17:40.415081 | orchestrator | Flush handlers ---------------------------------------------------------- 0.46s 2025-09-19 01:17:40.415093 | orchestrator | Fail test if any sub test failed ---------------------------------------- 0.33s 2025-09-19 01:17:40.733786 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-09-19 01:17:40.744658 | orchestrator | + set -e 2025-09-19 
01:17:40.744744 | orchestrator | + source /opt/manager-vars.sh 2025-09-19 01:17:40.744758 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-19 01:17:40.744770 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-19 01:17:40.744781 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-19 01:17:40.744792 | orchestrator | ++ CEPH_VERSION=reef 2025-09-19 01:17:40.744803 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-19 01:17:40.744815 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-19 01:17:40.744825 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-09-19 01:17:40.744836 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-09-19 01:17:40.744847 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-19 01:17:40.744858 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-19 01:17:40.744869 | orchestrator | ++ export ARA=false 2025-09-19 01:17:40.744879 | orchestrator | ++ ARA=false 2025-09-19 01:17:40.744890 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-19 01:17:40.744901 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-19 01:17:40.744911 | orchestrator | ++ export TEMPEST=true 2025-09-19 01:17:40.744922 | orchestrator | ++ TEMPEST=true 2025-09-19 01:17:40.744932 | orchestrator | ++ export IS_ZUUL=true 2025-09-19 01:17:40.745133 | orchestrator | ++ IS_ZUUL=true 2025-09-19 01:17:40.745147 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-09-19 01:17:40.745157 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-09-19 01:17:40.745168 | orchestrator | ++ export EXTERNAL_API=false 2025-09-19 01:17:40.745179 | orchestrator | ++ EXTERNAL_API=false 2025-09-19 01:17:40.745189 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-19 01:17:40.745200 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-19 01:17:40.745210 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-19 01:17:40.745221 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-19 01:17:40.745231 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-19 
01:17:40.745242 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-19 01:17:40.745252 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-09-19 01:17:40.745263 | orchestrator | + source /etc/os-release 2025-09-19 01:17:40.745274 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS' 2025-09-19 01:17:40.745296 | orchestrator | ++ NAME=Ubuntu 2025-09-19 01:17:40.745307 | orchestrator | ++ VERSION_ID=24.04 2025-09-19 01:17:40.745318 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)' 2025-09-19 01:17:40.745329 | orchestrator | ++ VERSION_CODENAME=noble 2025-09-19 01:17:40.745339 | orchestrator | ++ ID=ubuntu 2025-09-19 01:17:40.745350 | orchestrator | ++ ID_LIKE=debian 2025-09-19 01:17:40.745361 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-09-19 01:17:40.745371 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-09-19 01:17:40.745382 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-09-19 01:17:40.745393 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-09-19 01:17:40.745405 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-09-19 01:17:40.745416 | orchestrator | ++ LOGO=ubuntu-logo 2025-09-19 01:17:40.745427 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-09-19 01:17:40.745438 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-09-19 01:17:40.745450 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-09-19 01:17:40.772793 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-09-19 01:18:04.691537 | orchestrator | 2025-09-19 01:18:04.691703 | orchestrator | # Status of Elasticsearch 2025-09-19 01:18:04.691723 | orchestrator | 2025-09-19 01:18:04.691736 | orchestrator | + pushd /opt/configuration/contrib 2025-09-19 01:18:04.691749 | 
orchestrator | + echo 2025-09-19 01:18:04.691760 | orchestrator | + echo '# Status of Elasticsearch' 2025-09-19 01:18:04.691771 | orchestrator | + echo 2025-09-19 01:18:04.691782 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-09-19 01:18:04.896525 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-09-19 01:18:04.896700 | orchestrator | 2025-09-19 01:18:04.896719 | orchestrator | # Status of MariaDB 2025-09-19 01:18:04.896731 | orchestrator | 2025-09-19 01:18:04.896742 | orchestrator | + echo 2025-09-19 01:18:04.896753 | orchestrator | + echo '# Status of MariaDB' 2025-09-19 01:18:04.896764 | orchestrator | + echo 2025-09-19 01:18:04.896774 | orchestrator | + MARIADB_USER=root_shard_0 2025-09-19 01:18:04.896786 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-09-19 01:18:04.974972 | orchestrator | Reading package lists... 2025-09-19 01:18:05.333158 | orchestrator | Building dependency tree... 2025-09-19 01:18:05.333861 | orchestrator | Reading state information... 2025-09-19 01:18:05.738439 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-09-19 01:18:05.738545 | orchestrator | bc set to manually installed. 2025-09-19 01:18:05.738568 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded. 
2025-09-19 01:18:06.369475 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-09-19 01:18:06.369577 | orchestrator | 2025-09-19 01:18:06.369593 | orchestrator | # Status of Prometheus 2025-09-19 01:18:06.369606 | orchestrator | 2025-09-19 01:18:06.369617 | orchestrator | + echo 2025-09-19 01:18:06.369628 | orchestrator | + echo '# Status of Prometheus' 2025-09-19 01:18:06.369639 | orchestrator | + echo 2025-09-19 01:18:06.369705 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-09-19 01:18:06.431272 | orchestrator | Unauthorized 2025-09-19 01:18:06.434774 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-09-19 01:18:06.496726 | orchestrator | Unauthorized 2025-09-19 01:18:06.500367 | orchestrator | 2025-09-19 01:18:06.500399 | orchestrator | # Status of RabbitMQ 2025-09-19 01:18:06.500410 | orchestrator | 2025-09-19 01:18:06.500420 | orchestrator | + echo 2025-09-19 01:18:06.500430 | orchestrator | + echo '# Status of RabbitMQ' 2025-09-19 01:18:06.500440 | orchestrator | + echo 2025-09-19 01:18:06.500450 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-09-19 01:18:07.016075 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-09-19 01:18:07.026645 | orchestrator | 2025-09-19 01:18:07.026738 | orchestrator | # Status of Redis 2025-09-19 01:18:07.026751 | orchestrator | 2025-09-19 01:18:07.026763 | orchestrator | + echo 2025-09-19 01:18:07.026775 | orchestrator | + echo '# Status of Redis' 2025-09-19 01:18:07.026787 | orchestrator | + echo 2025-09-19 01:18:07.026799 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-09-19 01:18:07.035795 | orchestrator | 
TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002489s;;;0.000000;10.000000 2025-09-19 01:18:07.035846 | orchestrator | 2025-09-19 01:18:07.035860 | orchestrator | # Create backup of MariaDB database 2025-09-19 01:18:07.035873 | orchestrator | 2025-09-19 01:18:07.035885 | orchestrator | + popd 2025-09-19 01:18:07.035897 | orchestrator | + echo 2025-09-19 01:18:07.035909 | orchestrator | + echo '# Create backup of MariaDB database' 2025-09-19 01:18:07.035920 | orchestrator | + echo 2025-09-19 01:18:07.035933 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-09-19 01:18:09.003346 | orchestrator | 2025-09-19 01:18:08 | INFO  | Task 29292b20-8f91-4534-8a0b-396e54d7b54a (mariadb_backup) was prepared for execution. 2025-09-19 01:18:09.003443 | orchestrator | 2025-09-19 01:18:09 | INFO  | It takes a moment until task 29292b20-8f91-4534-8a0b-396e54d7b54a (mariadb_backup) has been started and output is visible here. 2025-09-19 01:20:51.183708 | orchestrator | 2025-09-19 01:20:51.183859 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 01:20:51.183877 | orchestrator | 2025-09-19 01:20:51.183889 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 01:20:51.183901 | orchestrator | Friday 19 September 2025 01:18:12 +0000 (0:00:00.175) 0:00:00.175 ****** 2025-09-19 01:20:51.183912 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:20:51.183924 | orchestrator | ok: [testbed-node-1] 2025-09-19 01:20:51.183935 | orchestrator | ok: [testbed-node-2] 2025-09-19 01:20:51.183946 | orchestrator | 2025-09-19 01:20:51.183957 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 01:20:51.183990 | orchestrator | Friday 19 September 2025 01:18:13 +0000 (0:00:00.311) 0:00:00.487 ****** 2025-09-19 01:20:51.184001 | orchestrator | ok: [testbed-node-0] => 
(item=enable_mariadb_True) 2025-09-19 01:20:51.184012 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-09-19 01:20:51.184023 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-19 01:20:51.184034 | orchestrator | 2025-09-19 01:20:51.184045 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-19 01:20:51.184055 | orchestrator | 2025-09-19 01:20:51.184067 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-19 01:20:51.184078 | orchestrator | Friday 19 September 2025 01:18:13 +0000 (0:00:00.565) 0:00:01.052 ****** 2025-09-19 01:20:51.184206 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-19 01:20:51.184223 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-19 01:20:51.184234 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-19 01:20:51.184245 | orchestrator | 2025-09-19 01:20:51.184258 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-19 01:20:51.184271 | orchestrator | Friday 19 September 2025 01:18:14 +0000 (0:00:00.432) 0:00:01.485 ****** 2025-09-19 01:20:51.184284 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 01:20:51.184297 | orchestrator | 2025-09-19 01:20:51.184310 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-09-19 01:20:51.184322 | orchestrator | Friday 19 September 2025 01:18:14 +0000 (0:00:00.571) 0:00:02.057 ****** 2025-09-19 01:20:51.184335 | orchestrator | ok: [testbed-node-1] 2025-09-19 01:20:51.184348 | orchestrator | ok: [testbed-node-0] 2025-09-19 01:20:51.184360 | orchestrator | ok: [testbed-node-2] 2025-09-19 01:20:51.184372 | orchestrator | 2025-09-19 01:20:51.184385 | orchestrator | TASK [mariadb : Taking full database backup via 
Mariabackup] ******************* 2025-09-19 01:20:51.184398 | orchestrator | Friday 19 September 2025 01:18:17 +0000 (0:00:02.962) 0:00:05.019 ****** 2025-09-19 01:20:51.184410 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:20:51.184423 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:20:51.184435 | orchestrator | 2025-09-19 01:20:51.184448 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2025-09-19 01:20:51.184460 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-19 01:20:51.184472 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-09-19 01:20:51.184485 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-19 01:20:51.184498 | orchestrator | mariadb_bootstrap_restart 2025-09-19 01:20:51.184510 | orchestrator | changed: [testbed-node-0] 2025-09-19 01:20:51.184523 | orchestrator | 2025-09-19 01:20:51.184535 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-19 01:20:51.184548 | orchestrator | skipping: no hosts matched 2025-09-19 01:20:51.184560 | orchestrator | 2025-09-19 01:20:51.184573 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-19 01:20:51.184586 | orchestrator | skipping: no hosts matched 2025-09-19 01:20:51.184599 | orchestrator | 2025-09-19 01:20:51.184627 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-19 01:20:51.184638 | orchestrator | skipping: no hosts matched 2025-09-19 01:20:51.184649 | orchestrator | 2025-09-19 01:20:51.184660 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-19 01:20:51.184670 | orchestrator | 2025-09-19 01:20:51.184681 | orchestrator | TASK [Include mariadb post-deploy.yml] 
***************************************** 2025-09-19 01:20:51.184692 | orchestrator | Friday 19 September 2025 01:20:50 +0000 (0:02:32.490) 0:02:37.510 ****** 2025-09-19 01:20:51.184702 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:20:51.184713 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:20:51.184724 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:20:51.184734 | orchestrator | 2025-09-19 01:20:51.184755 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-19 01:20:51.184844 | orchestrator | Friday 19 September 2025 01:20:50 +0000 (0:00:00.304) 0:02:37.815 ****** 2025-09-19 01:20:51.184858 | orchestrator | skipping: [testbed-node-0] 2025-09-19 01:20:51.184869 | orchestrator | skipping: [testbed-node-1] 2025-09-19 01:20:51.184880 | orchestrator | skipping: [testbed-node-2] 2025-09-19 01:20:51.184890 | orchestrator | 2025-09-19 01:20:51.184901 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 01:20:51.184913 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 01:20:51.184925 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 01:20:51.184936 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 01:20:51.184947 | orchestrator | 2025-09-19 01:20:51.184958 | orchestrator | 2025-09-19 01:20:51.184968 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 01:20:51.184979 | orchestrator | Friday 19 September 2025 01:20:50 +0000 (0:00:00.217) 0:02:38.032 ****** 2025-09-19 01:20:51.184990 | orchestrator | =============================================================================== 2025-09-19 01:20:51.185020 | orchestrator | mariadb : Taking full database backup via Mariabackup 
----------------- 152.49s 2025-09-19 01:20:51.185032 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 2.96s 2025-09-19 01:20:51.185043 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.57s 2025-09-19 01:20:51.185053 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s 2025-09-19 01:20:51.185064 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.43s 2025-09-19 01:20:51.185075 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-09-19 01:20:51.185086 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.30s 2025-09-19 01:20:51.185097 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.22s 2025-09-19 01:20:51.489963 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-09-19 01:20:51.498432 | orchestrator | + set -e 2025-09-19 01:20:51.498499 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-19 01:20:51.498520 | orchestrator | ++ export INTERACTIVE=false 2025-09-19 01:20:51.498538 | orchestrator | ++ INTERACTIVE=false 2025-09-19 01:20:51.498555 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-19 01:20:51.498574 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-19 01:20:51.498594 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-09-19 01:20:51.499980 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-09-19 01:20:51.503696 | orchestrator | 2025-09-19 01:20:51.503725 | orchestrator | # OpenStack endpoints 2025-09-19 01:20:51.503737 | orchestrator | 2025-09-19 01:20:51.503748 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-09-19 01:20:51.503759 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-09-19 01:20:51.503770 | 
orchestrator | + export OS_CLOUD=admin 2025-09-19 01:20:51.503780 | orchestrator | + OS_CLOUD=admin 2025-09-19 01:20:51.503792 | orchestrator | + echo 2025-09-19 01:20:51.503802 | orchestrator | + echo '# OpenStack endpoints' 2025-09-19 01:20:51.503843 | orchestrator | + echo 2025-09-19 01:20:51.503855 | orchestrator | + openstack endpoint list 2025-09-19 01:20:54.983044 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-19 01:20:54.983171 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-09-19 01:20:54.983187 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-19 01:20:54.983224 | orchestrator | | 0a472bceb90444a2bc2577a741b98a6b | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-09-19 01:20:54.983236 | orchestrator | | 0c09c07941d645f3a84b4d5e89111c8e | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-09-19 01:20:54.983247 | orchestrator | | 115e912bc886489d96770b941d618dd9 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-09-19 01:20:54.983257 | orchestrator | | 153849d2b4714cebafc87115e911030d | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-09-19 01:20:54.983283 | orchestrator | | 1b5c436ede6342c19a488a5aa8772b28 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-09-19 01:20:54.983294 | orchestrator | | 278d99e51a9f49f6b9cda1c645b08434 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 
2025-09-19 01:20:54.983305 | orchestrator | | 3bfed5c577a547deb0b4a3dd10dc06fa | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-09-19 01:20:54.983315 | orchestrator | | 43ab438e0abf41f0839a609ee90af49b | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-09-19 01:20:54.983326 | orchestrator | | 460cb8cbb2a74bd6b9fff9af73046d5a | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-09-19 01:20:54.983336 | orchestrator | | 4616b555748a4bed9056cff6157ef128 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-09-19 01:20:54.983347 | orchestrator | | 4ca2e55304e64b71a65c138867a656a8 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-09-19 01:20:54.983358 | orchestrator | | 551ed1c2cf234d68b64841133f248f8c | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-09-19 01:20:54.983368 | orchestrator | | 6bde5c8a6da64668acdc930c0224a12e | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-09-19 01:20:54.983379 | orchestrator | | 92b059360df64400a72b324830b2edd7 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-09-19 01:20:54.983390 | orchestrator | | 93986768ceeb4ce394585a0f259d6876 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-09-19 01:20:54.983400 | orchestrator | | 971ae173bc644e879ff66ff0b0e41f37 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-09-19 01:20:54.983411 | orchestrator | | acaad015320a430e8869503df5fa42a7 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-09-19 01:20:54.983421 | orchestrator | | cac5f37119fa4f479e4c5fe6180613ee | 
RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-09-19 01:20:54.983432 | orchestrator | | d5f10b206d754386853d903af84ced7d | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-09-19 01:20:54.983442 | orchestrator | | e32e3e7b9a10403a8b343b1fc6793756 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-09-19 01:20:54.983477 | orchestrator | | e503952357fe40e9aa11489c8d43061c | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-09-19 01:20:54.983489 | orchestrator | | f556c60bc79e42a3a45f940d033ca20b | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-09-19 01:20:54.983500 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-19 01:20:55.276309 | orchestrator | 2025-09-19 01:20:55.276408 | orchestrator | # Cinder 2025-09-19 01:20:55.276423 | orchestrator | 2025-09-19 01:20:55.276435 | orchestrator | + echo 2025-09-19 01:20:55.276446 | orchestrator | + echo '# Cinder' 2025-09-19 01:20:55.276457 | orchestrator | + echo 2025-09-19 01:20:55.276468 | orchestrator | + openstack volume service list 2025-09-19 01:20:58.000240 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-19 01:20:58.000336 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-09-19 01:20:58.000350 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-19 01:20:58.000361 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-09-19T01:20:55.000000 | 2025-09-19 01:20:58.000370 | orchestrator | | cinder-scheduler | testbed-node-2 | 
internal | enabled | up | 2025-09-19T01:20:55.000000 | 2025-09-19 01:20:58.000380 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-09-19T01:20:54.000000 | 2025-09-19 01:20:58.000390 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-09-19T01:20:54.000000 | 2025-09-19 01:20:58.000399 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-09-19T01:20:54.000000 | 2025-09-19 01:20:58.000409 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-09-19T01:20:54.000000 | 2025-09-19 01:20:58.000419 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-09-19T01:20:57.000000 | 2025-09-19 01:20:58.000428 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-09-19T01:20:57.000000 | 2025-09-19 01:20:58.000438 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-09-19T01:20:57.000000 | 2025-09-19 01:20:58.000448 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-19 01:20:58.314520 | orchestrator | 2025-09-19 01:20:58.314638 | orchestrator | # Neutron 2025-09-19 01:20:58.314653 | orchestrator | 2025-09-19 01:20:58.314665 | orchestrator | + echo 2025-09-19 01:20:58.314677 | orchestrator | + echo '# Neutron' 2025-09-19 01:20:58.314689 | orchestrator | + echo 2025-09-19 01:20:58.314700 | orchestrator | + openstack network agent list 2025-09-19 01:21:01.947877 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-19 01:21:01.947988 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-09-19 01:21:01.948003 | orchestrator | 
+--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-19 01:21:01.948014 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-09-19 01:21:01.948047 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-09-19 01:21:01.948058 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-09-19 01:21:01.948069 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-09-19 01:21:01.948100 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-09-19 01:21:01.948112 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-09-19 01:21:01.948122 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-19 01:21:01.948133 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-19 01:21:01.948144 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-19 01:21:01.948155 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-19 01:21:02.239419 | orchestrator | + openstack network service provider list 2025-09-19 01:21:04.846923 | orchestrator | +---------------+------+---------+ 2025-09-19 01:21:04.847028 | orchestrator | | Service Type | Name | Default | 2025-09-19 01:21:04.847043 | orchestrator | 
+---------------+------+---------+ 2025-09-19 01:21:04.847054 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-09-19 01:21:04.847065 | orchestrator | +---------------+------+---------+ 2025-09-19 01:21:05.143886 | orchestrator | 2025-09-19 01:21:05.143977 | orchestrator | # Nova 2025-09-19 01:21:05.143990 | orchestrator | 2025-09-19 01:21:05.144000 | orchestrator | + echo 2025-09-19 01:21:05.144009 | orchestrator | + echo '# Nova' 2025-09-19 01:21:05.144019 | orchestrator | + echo 2025-09-19 01:21:05.144029 | orchestrator | + openstack compute service list 2025-09-19 01:21:08.436714 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-19 01:21:08.436794 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-09-19 01:21:08.436804 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-19 01:21:08.436810 | orchestrator | | 546f9d59-6f3a-4e45-b4d7-7cf5f8010d8a | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-09-19T01:21:05.000000 | 2025-09-19 01:21:08.436817 | orchestrator | | e291b3d4-c79a-4b6b-9777-4970cac86593 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-09-19T01:20:59.000000 | 2025-09-19 01:21:08.436823 | orchestrator | | b524dd9a-67f5-47ba-9be7-57b3f6817cf7 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-09-19T01:21:00.000000 | 2025-09-19 01:21:08.436869 | orchestrator | | 286690cb-9af0-4537-8176-4471ac89f6fc | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-09-19T01:20:59.000000 | 2025-09-19 01:21:08.436875 | orchestrator | | e9b4f974-db93-44a7-9e1a-319586be9e7d | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-09-19T01:21:02.000000 | 2025-09-19 01:21:08.436899 | orchestrator | | 839af3a2-f99f-4607-9644-423ea9e4fbc7 | nova-conductor | 
testbed-node-1 | internal | enabled | up | 2025-09-19T01:21:02.000000 | 2025-09-19 01:21:08.436905 | orchestrator | | 7d55ce3a-83a7-43e0-b80a-82435a305ab5 | nova-compute | testbed-node-4 | nova | enabled | up | 2025-09-19T01:21:03.000000 | 2025-09-19 01:21:08.436912 | orchestrator | | 745b3bcd-98d7-48dc-97ff-d3fa2c2038b2 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-09-19T01:21:04.000000 | 2025-09-19 01:21:08.436918 | orchestrator | | 19ee4760-29e7-48d3-a6e4-5208ce501df3 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-09-19T01:21:04.000000 | 2025-09-19 01:21:08.436924 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-19 01:21:08.723105 | orchestrator | + openstack hypervisor list 2025-09-19 01:21:13.505361 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-19 01:21:13.505544 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-09-19 01:21:13.505566 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-19 01:21:13.505595 | orchestrator | | 31daa32a-da9b-4715-8c45-d233debeddb0 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-09-19 01:21:13.505645 | orchestrator | | d75966b6-321c-483a-8d37-48409b32e1a0 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-09-19 01:21:13.505657 | orchestrator | | 258c5447-f19a-4d06-b8d2-61b3f920de2a | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-09-19 01:21:13.505669 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-19 01:21:13.830182 | orchestrator | 2025-09-19 01:21:13.830284 | orchestrator | # Run OpenStack test play 2025-09-19 01:21:13.830299 | orchestrator | 2025-09-19 01:21:13.830311 | orchestrator | + echo 2025-09-19 
01:21:13.830322 | orchestrator | + echo '# Run OpenStack test play' 2025-09-19 01:21:13.830335 | orchestrator | + echo 2025-09-19 01:21:13.830346 | orchestrator | + osism apply --environment openstack test 2025-09-19 01:21:15.685030 | orchestrator | 2025-09-19 01:21:15 | INFO  | Trying to run play test in environment openstack 2025-09-19 01:21:25.889102 | orchestrator | 2025-09-19 01:21:25 | INFO  | Task e363a5eb-f9c5-4379-aeea-1dc816aa5aa1 (test) was prepared for execution. 2025-09-19 01:21:25.889207 | orchestrator | 2025-09-19 01:21:25 | INFO  | It takes a moment until task e363a5eb-f9c5-4379-aeea-1dc816aa5aa1 (test) has been started and output is visible here. 2025-09-19 01:23:09.930521 | orchestrator | 2025-09-19 01:23:09 | INFO  | Trying to run play test in environment openstack 2025-09-19 01:23:09.930627 | orchestrator | 2025-09-19 01:23:09 | INFO  | Task 992f4995-1809-4d09-bc7c-c4526227246b (test) was prepared for execution. 2025-09-19 01:23:09.930643 | orchestrator | 2025-09-19 01:23:09 | INFO  | It takes a moment until task 992f4995-1809-4d09-bc7c-c4526227246b (test) has been started and output is visible here. 
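The health checks above (`openstack volume service list`, `openstack network agent list`, `openstack compute service list`) only print tables for a human to eyeball. A minimal sketch of how that pass/fail decision could be automated in the same bash style, assuming the states are fetched with the CLI's `-f value -c State` output formatter (the helper name `check_all_up` is hypothetical, not part of the job's scripts):

```shell
#!/usr/bin/env bash
# Hypothetical helper: succeed only when every service state line is "up".
# Input is one state per line, as produced by e.g.
#   openstack compute service list -f value -c State
check_all_up() {
    local states="$1"
    # grep -v matches any line that is NOT exactly "up";
    # -q makes it quiet, so a hit means at least one service is down.
    if printf '%s\n' "$states" | grep -qvx 'up'; then
        return 1
    fi
    return 0
}

# Example with a captured sample (in the real script this would be
# "$(openstack compute service list -f value -c State)"):
check_all_up $'up\nup\nup' && echo "all services up"
```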
2025-09-19 01:24:09.874132 | orchestrator | 2025-09-19 01:24:09.874256 | orchestrator | PLAY [Create test project] ***************************************************** 2025-09-19 01:24:09.874271 | orchestrator | 2025-09-19 01:24:09.874282 | orchestrator | TASK [Create test domain] ****************************************************** 2025-09-19 01:24:09.874293 | orchestrator | Friday 19 September 2025 01:21:29 +0000 (0:00:00.080) 0:00:00.080 ****** 2025-09-19 01:24:09.874303 | orchestrator | changed: [localhost] 2025-09-19 01:24:09.874314 | orchestrator | 2025-09-19 01:24:09.874323 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-09-19 01:24:09.874333 | orchestrator | Friday 19 September 2025 01:21:33 +0000 (0:00:03.763) 0:00:03.844 ****** 2025-09-19 01:24:09.874343 | orchestrator | changed: [localhost] 2025-09-19 01:24:09.874353 | orchestrator | 2025-09-19 01:24:09.874362 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-09-19 01:24:09.874372 | orchestrator | Friday 19 September 2025 01:21:37 +0000 (0:00:04.146) 0:00:07.991 ****** 2025-09-19 01:24:09.874382 | orchestrator | changed: [localhost] 2025-09-19 01:24:09.874391 | orchestrator | 2025-09-19 01:24:09.874401 | orchestrator | TASK [Create test project] ***************************************************** 2025-09-19 01:24:09.874411 | orchestrator | Friday 19 September 2025 01:21:44 +0000 (0:00:06.493) 0:00:14.485 ****** 2025-09-19 01:24:09.874420 | orchestrator | changed: [localhost] 2025-09-19 01:24:09.874430 | orchestrator | 2025-09-19 01:24:09.874440 | orchestrator | TASK [Create test user] ******************************************************** 2025-09-19 01:24:09.874450 | orchestrator | Friday 19 September 2025 01:21:48 +0000 (0:00:04.004) 0:00:18.489 ****** 2025-09-19 01:24:09.874459 | orchestrator | changed: [localhost] 2025-09-19 01:24:09.874469 | orchestrator | 2025-09-19 01:24:09.874479 | 
orchestrator | TASK [Add member roles to user test] ******************************************* 2025-09-19 01:24:09.874510 | orchestrator | Friday 19 September 2025 01:21:52 +0000 (0:00:04.087) 0:00:22.577 ****** 2025-09-19 01:24:09.874520 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-09-19 01:24:09.874531 | orchestrator | changed: [localhost] => (item=member) 2025-09-19 01:24:09.874541 | orchestrator | changed: [localhost] => (item=creator) 2025-09-19 01:24:09.874550 | orchestrator | 2025-09-19 01:24:09.874560 | orchestrator | TASK [Create test server group] ************************************************ 2025-09-19 01:24:09.874570 | orchestrator | Friday 19 September 2025 01:22:04 +0000 (0:00:11.996) 0:00:34.573 ****** 2025-09-19 01:24:09.874579 | orchestrator | changed: [localhost] 2025-09-19 01:24:09.874589 | orchestrator | 2025-09-19 01:24:09.874598 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-09-19 01:24:09.874608 | orchestrator | Friday 19 September 2025 01:22:08 +0000 (0:00:04.375) 0:00:38.949 ****** 2025-09-19 01:24:09.874618 | orchestrator | changed: [localhost] 2025-09-19 01:24:09.874629 | orchestrator | 2025-09-19 01:24:09.874655 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-09-19 01:24:09.874667 | orchestrator | Friday 19 September 2025 01:22:14 +0000 (0:00:05.891) 0:00:44.840 ****** 2025-09-19 01:24:09.874678 | orchestrator | changed: [localhost] 2025-09-19 01:24:09.874690 | orchestrator | 2025-09-19 01:24:09.874701 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-09-19 01:24:09.874712 | orchestrator | Friday 19 September 2025 01:22:19 +0000 (0:00:04.456) 0:00:49.297 ****** 2025-09-19 01:24:09.874724 | orchestrator | changed: [localhost] 2025-09-19 01:24:09.874735 | orchestrator | 2025-09-19 01:24:09.874746 | orchestrator | TASK [Add rule to icmp security 
group] ***************************************** 2025-09-19 01:24:09.874758 | orchestrator | Friday 19 September 2025 01:22:23 +0000 (0:00:04.029) 0:00:53.327 ****** 2025-09-19 01:24:09.874769 | orchestrator | changed: [localhost] 2025-09-19 01:24:09.874780 | orchestrator | 2025-09-19 01:24:09.874791 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-09-19 01:24:09.874802 | orchestrator | Friday 19 September 2025 01:22:27 +0000 (0:00:04.323) 0:00:57.650 ****** 2025-09-19 01:24:09.874814 | orchestrator | changed: [localhost] 2025-09-19 01:24:09.874825 | orchestrator | 2025-09-19 01:24:09.874837 | orchestrator | TASK [Create test network topology] ******************************************** 2025-09-19 01:24:09.874848 | orchestrator | Friday 19 September 2025 01:22:31 +0000 (0:00:03.881) 0:01:01.531 ****** 2025-09-19 01:24:09.874860 | orchestrator | changed: [localhost] 2025-09-19 01:24:09.874871 | orchestrator | 2025-09-19 01:24:09.874883 | orchestrator | TASK [Create test instances] *************************************************** 2025-09-19 01:24:09.874895 | orchestrator | Friday 19 September 2025 01:22:47 +0000 (0:00:16.426) 0:01:17.958 ****** 2025-09-19 01:24:09.874907 | orchestrator | failed: [localhost] (item=test) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test", "msg": "No Flavor found for SCS-1L-1-5"} 2025-09-19 01:24:09.874927 | orchestrator | failed: [localhost] (item=test-1) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test-1", "msg": "No Flavor found for SCS-1L-1-5"} 2025-09-19 01:24:09.874939 | orchestrator | failed: [localhost] (item=test-2) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test-2", "msg": "No Flavor found for SCS-1L-1-5"} 2025-09-19 
01:24:09.874950 | orchestrator | failed: [localhost] (item=test-3) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test-3", "msg": "No Flavor found for SCS-1L-1-5"} 2025-09-19 01:24:09.874961 | orchestrator | failed: [localhost] (item=test-4) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test-4", "msg": "No Flavor found for SCS-1L-1-5"} 2025-09-19 01:24:09.874973 | orchestrator | 2025-09-19 01:24:09.874984 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 01:24:09.875017 | orchestrator | localhost : ok=13  changed=13  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-09-19 01:24:09.875028 | orchestrator | 2025-09-19 01:24:09.875038 | orchestrator | 2025-09-19 01:24:09.875047 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 01:24:09.875057 | orchestrator | Friday 19 September 2025 01:23:09 +0000 (0:00:21.879) 0:01:39.838 ****** 2025-09-19 01:24:09.875066 | orchestrator | =============================================================================== 2025-09-19 01:24:09.875076 | orchestrator | Create test instances -------------------------------------------------- 21.88s 2025-09-19 01:24:09.875085 | orchestrator | Create test network topology ------------------------------------------- 16.43s 2025-09-19 01:24:09.875095 | orchestrator | Add member roles to user test ------------------------------------------ 12.00s 2025-09-19 01:24:09.875104 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.49s 2025-09-19 01:24:09.875114 | orchestrator | Create ssh security group ----------------------------------------------- 5.89s 2025-09-19 01:24:09.875123 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.46s 
2025-09-19 01:24:09.875133 | orchestrator | Create test server group ------------------------------------------------ 4.38s
2025-09-19 01:24:09.875142 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.32s
2025-09-19 01:24:09.875151 | orchestrator | Create test-admin user -------------------------------------------------- 4.15s
2025-09-19 01:24:09.875161 | orchestrator | Create test user -------------------------------------------------------- 4.09s
2025-09-19 01:24:09.875188 | orchestrator | Create icmp security group ---------------------------------------------- 4.03s
2025-09-19 01:24:09.875198 | orchestrator | Create test project ----------------------------------------------------- 4.00s
2025-09-19 01:24:09.875208 | orchestrator | Create test keypair ----------------------------------------------------- 3.88s
2025-09-19 01:24:09.875217 | orchestrator | Create test domain ------------------------------------------------------ 3.76s
2025-09-19 01:24:09.875227 | orchestrator |
2025-09-19 01:24:09.875236 | orchestrator | PLAY [Create test project] *****************************************************
2025-09-19 01:24:09.875246 | orchestrator |
2025-09-19 01:24:09.875256 | orchestrator | TASK [Create test domain] ******************************************************
2025-09-19 01:24:09.875265 | orchestrator | Friday 19 September 2025 01:23:13 +0000 (0:00:00.080) 0:00:00.080 ******
2025-09-19 01:24:09.875275 | orchestrator | ok: [localhost]
2025-09-19 01:24:09.875285 | orchestrator |
2025-09-19 01:24:09.875299 | orchestrator | TASK [Create test-admin user] **************************************************
2025-09-19 01:24:09.875309 | orchestrator | Friday 19 September 2025 01:23:17 +0000 (0:00:03.789) 0:00:03.870 ******
2025-09-19 01:24:09.875319 | orchestrator | ok: [localhost]
2025-09-19 01:24:09.875329 | orchestrator |
2025-09-19 01:24:09.875338 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2025-09-19 01:24:09.875348 | orchestrator | Friday 19 September 2025 01:23:21 +0000 (0:00:03.656) 0:00:07.526 ******
2025-09-19 01:24:09.875357 | orchestrator | changed: [localhost]
2025-09-19 01:24:09.875367 | orchestrator |
2025-09-19 01:24:09.875376 | orchestrator | TASK [Create test project] *****************************************************
2025-09-19 01:24:09.875386 | orchestrator | Friday 19 September 2025 01:23:27 +0000 (0:00:06.556) 0:00:14.083 ******
2025-09-19 01:24:09.875399 | orchestrator | ok: [localhost]
2025-09-19 01:24:09.875409 | orchestrator |
2025-09-19 01:24:09.875419 | orchestrator | TASK [Create test user] ********************************************************
2025-09-19 01:24:09.875428 | orchestrator | Friday 19 September 2025 01:23:31 +0000 (0:00:04.040) 0:00:18.123 ******
2025-09-19 01:24:09.875438 | orchestrator | ok: [localhost]
2025-09-19 01:24:09.875447 | orchestrator |
2025-09-19 01:24:09.875457 | orchestrator | TASK [Add member roles to user test] *******************************************
2025-09-19 01:24:09.875466 | orchestrator | Friday 19 September 2025 01:23:35 +0000 (0:00:03.689) 0:00:21.813 ******
2025-09-19 01:24:09.875482 | orchestrator | ok: [localhost] => (item=load-balancer_member)
2025-09-19 01:24:09.875491 | orchestrator | ok: [localhost] => (item=member)
2025-09-19 01:24:09.875501 | orchestrator | ok: [localhost] => (item=creator)
2025-09-19 01:24:09.875510 | orchestrator |
2025-09-19 01:24:09.875520 | orchestrator | TASK [Create test server group] ************************************************
2025-09-19 01:24:09.875530 | orchestrator | Friday 19 September 2025 01:23:46 +0000 (0:00:11.130) 0:00:32.944 ******
2025-09-19 01:24:09.875539 | orchestrator | ok: [localhost]
2025-09-19 01:24:09.875549 | orchestrator |
2025-09-19 01:24:09.875558 | orchestrator | TASK [Create ssh security group] ***********************************************
2025-09-19 01:24:09.875568 | orchestrator | Friday 19 September 2025 01:23:50 +0000 (0:00:03.940) 0:00:36.885 ******
2025-09-19 01:24:09.875577 | orchestrator | ok: [localhost]
2025-09-19 01:24:09.875586 | orchestrator |
2025-09-19 01:24:09.875596 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2025-09-19 01:24:09.875605 | orchestrator | Friday 19 September 2025 01:23:54 +0000 (0:00:03.937) 0:00:40.823 ******
2025-09-19 01:24:09.875615 | orchestrator | ok: [localhost]
2025-09-19 01:24:09.875624 | orchestrator |
2025-09-19 01:24:09.875634 | orchestrator | TASK [Create icmp security group] **********************************************
2025-09-19 01:24:09.875643 | orchestrator | Friday 19 September 2025 01:23:58 +0000 (0:00:04.105) 0:00:44.928 ******
2025-09-19 01:24:09.875653 | orchestrator | ok: [localhost]
2025-09-19 01:24:09.875662 | orchestrator |
2025-09-19 01:24:09.875672 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2025-09-19 01:24:09.875681 | orchestrator | Friday 19 September 2025 01:24:02 +0000 (0:00:03.669) 0:00:48.597 ******
2025-09-19 01:24:09.875691 | orchestrator | ok: [localhost]
2025-09-19 01:24:09.875700 | orchestrator |
2025-09-19 01:24:09.875710 | orchestrator | TASK [Create test keypair] *****************************************************
2025-09-19 01:24:09.875719 | orchestrator | Friday 19 September 2025 01:24:06 +0000 (0:00:03.704) 0:00:52.302 ******
2025-09-19 01:24:09.875729 | orchestrator | ok: [localhost]
2025-09-19 01:24:09.875738 | orchestrator |
2025-09-19 01:24:09.875747 | orchestrator | TASK [Create test network topology] ********************************************
2025-09-19 01:24:09.875763 | orchestrator | Friday 19 September 2025 01:24:09 +0000 (0:00:03.690) 0:00:55.992 ******
2025-09-19 01:24:36.759375 | orchestrator | changed: [localhost]
2025-09-19 01:24:36.759501 | orchestrator |
2025-09-19 01:24:36.759519 | orchestrator | TASK [Create test instances] ***************************************************
2025-09-19 01:24:36.759533 | orchestrator | Friday 19 September 2025 01:24:15 +0000 (0:00:06.050) 0:01:02.043 ******
2025-09-19 01:24:36.759545 | orchestrator | failed: [localhost] (item=test) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test", "msg": "No Flavor found for SCS-1L-1-5"}
2025-09-19 01:24:36.759559 | orchestrator | failed: [localhost] (item=test-1) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test-1", "msg": "No Flavor found for SCS-1L-1-5"}
2025-09-19 01:24:36.759570 | orchestrator | failed: [localhost] (item=test-2) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test-2", "msg": "No Flavor found for SCS-1L-1-5"}
2025-09-19 01:24:36.759581 | orchestrator | failed: [localhost] (item=test-3) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test-3", "msg": "No Flavor found for SCS-1L-1-5"}
2025-09-19 01:24:36.759592 | orchestrator | failed: [localhost] (item=test-4) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test-4", "msg": "No Flavor found for SCS-1L-1-5"}
2025-09-19 01:24:36.759603 | orchestrator |
2025-09-19 01:24:36.759614 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 01:24:36.759626 | orchestrator | localhost : ok=13  changed=2  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-09-19 01:24:36.759668 | orchestrator |
2025-09-19 01:24:36.759680 | orchestrator |
2025-09-19 01:24:36.759690 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 01:24:36.759701 | orchestrator | Friday 19 September 2025 01:24:36 +0000 (0:00:20.553) 0:01:22.596 ******
2025-09-19 01:24:36.759712 | orchestrator | ===============================================================================
2025-09-19 01:24:36.759723 | orchestrator | Create test instances -------------------------------------------------- 20.55s
2025-09-19 01:24:36.759734 | orchestrator | Add member roles to user test ------------------------------------------ 11.13s
2025-09-19 01:24:36.759744 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.56s
2025-09-19 01:24:36.759755 | orchestrator | Create test network topology -------------------------------------------- 6.05s
2025-09-19 01:24:36.759766 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.11s
2025-09-19 01:24:36.759777 | orchestrator | Create test project ----------------------------------------------------- 4.04s
2025-09-19 01:24:36.759787 | orchestrator | Create test server group ------------------------------------------------ 3.94s
2025-09-19 01:24:36.759798 | orchestrator | Create ssh security group ----------------------------------------------- 3.94s
2025-09-19 01:24:36.759809 | orchestrator | Create test domain ------------------------------------------------------ 3.79s
2025-09-19 01:24:36.759821 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.70s
2025-09-19 01:24:36.759831 | orchestrator | Create test keypair ----------------------------------------------------- 3.69s
2025-09-19 01:24:36.759842 | orchestrator | Create test user -------------------------------------------------------- 3.69s
2025-09-19 01:24:36.759853 | orchestrator | Create icmp security group ---------------------------------------------- 3.67s
2025-09-19 01:24:36.759864 | orchestrator | Create test-admin user -------------------------------------------------- 3.66s
2025-09-19 01:24:37.340748 | orchestrator | ERROR
2025-09-19 01:24:37.341231 | orchestrator | {
2025-09-19 01:24:37.341338 | orchestrator | "delta": "0:09:27.401913",
2025-09-19 01:24:37.341408 | orchestrator | "end": "2025-09-19 01:24:37.088018",
2025-09-19 01:24:37.341468 | orchestrator | "msg": "non-zero return code",
2025-09-19 01:24:37.341524 | orchestrator | "rc": 2,
2025-09-19 01:24:37.341578 | orchestrator | "start": "2025-09-19 01:15:09.686105"
2025-09-19 01:24:37.341629 | orchestrator | } failure
2025-09-19 01:24:37.379002 |
2025-09-19 01:24:37.379172 | PLAY RECAP
2025-09-19 01:24:37.379283 | orchestrator | ok: 23 changed: 10 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2025-09-19 01:24:37.379335 |
2025-09-19 01:24:37.600779 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-09-19 01:24:37.602176 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-19 01:24:38.341957 |
2025-09-19 01:24:38.342122 | PLAY [Post output play]
2025-09-19 01:24:38.358253 |
2025-09-19 01:24:38.358389 | LOOP [stage-output : Register sources]
2025-09-19 01:24:38.428546 |
2025-09-19 01:24:38.428945 | TASK [stage-output : Check sudo]
2025-09-19 01:24:39.340542 | orchestrator | sudo: a password is required
2025-09-19 01:24:39.467486 | orchestrator | ok: Runtime: 0:00:00.016242
2025-09-19 01:24:39.481736 |
2025-09-19 01:24:39.481953 | LOOP [stage-output : Set source and destination for files and folders]
2025-09-19 01:24:39.520925 |
2025-09-19 01:24:39.521205 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-09-19 01:24:39.589123 | orchestrator | ok
2025-09-19 01:24:39.598380 |
2025-09-19 01:24:39.598516 | LOOP [stage-output : Ensure target folders exist]
2025-09-19 01:24:40.060474 | orchestrator | ok: "docs"
2025-09-19 01:24:40.060815 |
2025-09-19 01:24:40.312172 | orchestrator | ok: "artifacts"
2025-09-19 01:24:40.550542 | orchestrator | ok: "logs"
2025-09-19 01:24:40.574463 |
2025-09-19 01:24:40.574665 | LOOP [stage-output : Copy files and folders to staging folder]
2025-09-19 01:24:40.616728 |
2025-09-19 01:24:40.617048 | TASK [stage-output : Make all log files readable]
2025-09-19 01:24:40.915415 | orchestrator | ok
2025-09-19 01:24:40.926207 |
2025-09-19 01:24:40.926381 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-09-19 01:24:40.961796 | orchestrator | skipping: Conditional result was False
2025-09-19 01:24:40.969715 |
2025-09-19 01:24:40.969819 | TASK [stage-output : Discover log files for compression]
2025-09-19 01:24:40.993039 | orchestrator | skipping: Conditional result was False
2025-09-19 01:24:41.002381 |
2025-09-19 01:24:41.002535 | LOOP [stage-output : Archive everything from logs]
2025-09-19 01:24:41.047443 |
2025-09-19 01:24:41.047632 | PLAY [Post cleanup play]
2025-09-19 01:24:41.056911 |
2025-09-19 01:24:41.057017 | TASK [Set cloud fact (Zuul deployment)]
2025-09-19 01:24:41.116656 | orchestrator | ok
2025-09-19 01:24:41.124820 |
2025-09-19 01:24:41.124975 | TASK [Set cloud fact (local deployment)]
2025-09-19 01:24:41.148353 | orchestrator | skipping: Conditional result was False
2025-09-19 01:24:41.155703 |
2025-09-19 01:24:41.155806 | TASK [Clean the cloud environment]
2025-09-19 01:24:41.737023 | orchestrator | 2025-09-19 01:24:41 - clean up servers
2025-09-19 01:24:42.484791 | orchestrator | 2025-09-19 01:24:42 - testbed-manager
2025-09-19 01:24:42.567172 | orchestrator | 2025-09-19 01:24:42 - testbed-node-5
2025-09-19 01:24:42.649650 | orchestrator | 2025-09-19 01:24:42 - testbed-node-3
2025-09-19 01:24:42.734001 | orchestrator | 2025-09-19 01:24:42 - testbed-node-1
2025-09-19 01:24:42.823407 | orchestrator | 2025-09-19 01:24:42 - testbed-node-0
2025-09-19 01:24:42.911780 | orchestrator | 2025-09-19 01:24:42 - testbed-node-2
2025-09-19 01:24:42.995751 | orchestrator | 2025-09-19 01:24:42 - testbed-node-4
2025-09-19 01:24:43.081082 | orchestrator | 2025-09-19 01:24:43 - clean up keypairs
2025-09-19 01:24:43.102280 | orchestrator | 2025-09-19 01:24:43 - testbed
2025-09-19 01:24:43.131474 | orchestrator | 2025-09-19 01:24:43 - wait for servers to be gone
2025-09-19 01:24:54.061455 | orchestrator | 2025-09-19 01:24:54 - clean up ports
2025-09-19 01:24:54.235847 | orchestrator | 2025-09-19 01:24:54 - 20cf926e-fe55-4166-9ec3-5ba6d6ac75f8
2025-09-19 01:24:54.529942 | orchestrator | 2025-09-19 01:24:54 - 5ab2208b-cb81-4e83-8d1e-6b4b60556b0a
2025-09-19 01:24:54.783366 | orchestrator | 2025-09-19 01:24:54 - 646975c1-6acf-49ac-b1c1-c31995c1a6cd
2025-09-19 01:24:55.006063 | orchestrator | 2025-09-19 01:24:55 - 7dd2c10b-00e9-4e0e-8082-e800cf48e6ab
2025-09-19 01:24:55.208161 | orchestrator | 2025-09-19 01:24:55 - 7ea935c7-a173-4a21-bb48-7ba6761c6e4f
2025-09-19 01:24:55.410512 | orchestrator | 2025-09-19 01:24:55 - eac84548-1c38-4caa-b8f5-0764f834ca6a
2025-09-19 01:24:55.830764 | orchestrator | 2025-09-19 01:24:55 - f5837a12-e1d2-41f4-9345-decbe46102e6
2025-09-19 01:24:56.084984 | orchestrator | 2025-09-19 01:24:56 - clean up volumes
2025-09-19 01:24:56.200025 | orchestrator | 2025-09-19 01:24:56 - testbed-volume-manager-base
2025-09-19 01:24:56.240008 | orchestrator | 2025-09-19 01:24:56 - testbed-volume-3-node-base
2025-09-19 01:24:56.290471 | orchestrator | 2025-09-19 01:24:56 - testbed-volume-1-node-base
2025-09-19 01:24:56.347854 | orchestrator | 2025-09-19 01:24:56 - testbed-volume-0-node-base
2025-09-19 01:24:56.391182 | orchestrator | 2025-09-19 01:24:56 - testbed-volume-4-node-base
2025-09-19 01:24:56.436487 | orchestrator | 2025-09-19 01:24:56 - testbed-volume-2-node-base
2025-09-19 01:24:56.481741 | orchestrator | 2025-09-19 01:24:56 - testbed-volume-5-node-base
2025-09-19 01:24:56.525668 | orchestrator | 2025-09-19 01:24:56 - testbed-volume-1-node-4
2025-09-19 01:24:56.567211 | orchestrator | 2025-09-19 01:24:56 - testbed-volume-6-node-3
2025-09-19 01:24:56.610328 | orchestrator | 2025-09-19 01:24:56 - testbed-volume-7-node-4
2025-09-19 01:24:56.655152 | orchestrator | 2025-09-19 01:24:56 - testbed-volume-4-node-4
2025-09-19 01:24:56.697496 | orchestrator | 2025-09-19 01:24:56 - testbed-volume-3-node-3
2025-09-19 01:24:56.737986 | orchestrator | 2025-09-19 01:24:56 - testbed-volume-2-node-5
2025-09-19 01:24:56.779893 | orchestrator | 2025-09-19 01:24:56 - testbed-volume-0-node-3
2025-09-19 01:24:56.822499 | orchestrator | 2025-09-19 01:24:56 - testbed-volume-5-node-5
2025-09-19 01:24:56.864775 | orchestrator | 2025-09-19 01:24:56 - testbed-volume-8-node-5
2025-09-19 01:24:56.904704 | orchestrator | 2025-09-19 01:24:56 - disconnect routers
2025-09-19 01:24:57.009138 | orchestrator | 2025-09-19 01:24:57 - testbed
2025-09-19 01:24:57.819414 | orchestrator | 2025-09-19 01:24:57 - clean up subnets
2025-09-19 01:24:57.871893 | orchestrator | 2025-09-19 01:24:57 - subnet-testbed-management
2025-09-19 01:24:58.493485 | orchestrator | 2025-09-19 01:24:58 - clean up networks
2025-09-19 01:24:58.627695 | orchestrator | 2025-09-19 01:24:58 - net-testbed-management
2025-09-19 01:24:58.896849 | orchestrator | 2025-09-19 01:24:58 - clean up security groups
2025-09-19 01:24:58.938489 | orchestrator | 2025-09-19 01:24:58 - testbed-node
2025-09-19 01:24:59.045270 | orchestrator | 2025-09-19 01:24:59 - testbed-management
2025-09-19 01:24:59.167337 | orchestrator | 2025-09-19 01:24:59 - clean up floating ips
2025-09-19 01:24:59.202127 | orchestrator | 2025-09-19 01:24:59 - 81.163.192.51
2025-09-19 01:24:59.585056 | orchestrator | 2025-09-19 01:24:59 - clean up routers
2025-09-19 01:24:59.695468 | orchestrator | 2025-09-19 01:24:59 - testbed
2025-09-19 01:25:01.208824 | orchestrator | ok: Runtime: 0:00:19.471435
2025-09-19 01:25:01.213605 |
2025-09-19 01:25:01.213805 | PLAY RECAP
2025-09-19 01:25:01.213981 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-09-19 01:25:01.214058 |
2025-09-19 01:25:01.344933 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-19 01:25:01.347536 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-19 01:25:02.106219 |
2025-09-19 01:25:02.106385 | PLAY [Cleanup play]
2025-09-19 01:25:02.123095 |
2025-09-19 01:25:02.123236 | TASK [Set cloud fact (Zuul deployment)]
2025-09-19 01:25:02.191719 | orchestrator | ok
2025-09-19 01:25:02.202359 |
2025-09-19 01:25:02.202597 | TASK [Set cloud fact (local deployment)]
2025-09-19 01:25:02.237743 | orchestrator | skipping: Conditional result was False
2025-09-19 01:25:02.256005 |
2025-09-19 01:25:02.256155 | TASK [Clean the cloud environment]
2025-09-19 01:25:03.384174 | orchestrator | 2025-09-19 01:25:03 - clean up servers
2025-09-19 01:25:03.859144 | orchestrator | 2025-09-19 01:25:03 - clean up keypairs
2025-09-19 01:25:03.876456 | orchestrator | 2025-09-19 01:25:03 - wait for servers to be gone
2025-09-19 01:25:03.922055 | orchestrator | 2025-09-19 01:25:03 - clean up ports
2025-09-19 01:25:03.999699 | orchestrator | 2025-09-19 01:25:03 - clean up volumes
2025-09-19 01:25:04.062393 | orchestrator | 2025-09-19 01:25:04 - disconnect routers
2025-09-19 01:25:04.090009 | orchestrator | 2025-09-19 01:25:04 - clean up subnets
2025-09-19 01:25:04.110536 | orchestrator | 2025-09-19 01:25:04 - clean up networks
2025-09-19 01:25:04.262648 | orchestrator | 2025-09-19 01:25:04 - clean up security groups
2025-09-19 01:25:04.376834 | orchestrator | 2025-09-19 01:25:04 - clean up floating ips
2025-09-19 01:25:04.400905 | orchestrator | 2025-09-19 01:25:04 - clean up routers
2025-09-19 01:25:04.795473 | orchestrator | ok: Runtime: 0:00:01.406382
2025-09-19 01:25:04.799954 |
2025-09-19 01:25:04.800163 | PLAY RECAP
2025-09-19 01:25:04.800334 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-09-19 01:25:04.800433 |
2025-09-19 01:25:04.927090 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-19 01:25:04.929532 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-19 01:25:05.665660 |
2025-09-19 01:25:05.665817 | PLAY [Base post-fetch]
2025-09-19 01:25:05.681377 |
2025-09-19 01:25:05.681502 | TASK [fetch-output : Set log path for multiple nodes]
2025-09-19 01:25:05.737005 | orchestrator | skipping: Conditional result was False
2025-09-19 01:25:05.749138 |
2025-09-19 01:25:05.749314 | TASK [fetch-output : Set log path for single node]
2025-09-19 01:25:05.783564 | orchestrator | ok
2025-09-19 01:25:05.791022 |
2025-09-19 01:25:05.791140 | LOOP [fetch-output : Ensure local output dirs]
2025-09-19 01:25:06.267496 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/55676f51bab14a6e86aaaf487e9417c0/work/logs"
2025-09-19 01:25:06.541818 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/55676f51bab14a6e86aaaf487e9417c0/work/artifacts"
2025-09-19 01:25:06.808278 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/55676f51bab14a6e86aaaf487e9417c0/work/docs"
2025-09-19 01:25:06.839446 |
2025-09-19 01:25:06.839624 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-09-19 01:25:07.793570 | orchestrator | changed: .d..t...... ./
2025-09-19 01:25:07.793894 | orchestrator | changed: All items complete
2025-09-19 01:25:07.793951 |
2025-09-19 01:25:08.531474 | orchestrator | changed: .d..t...... ./
2025-09-19 01:25:09.283021 | orchestrator | changed: .d..t...... ./
2025-09-19 01:25:09.310084 |
2025-09-19 01:25:09.310221 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-09-19 01:25:09.343655 | orchestrator | skipping: Conditional result was False
2025-09-19 01:25:09.346117 | orchestrator | skipping: Conditional result was False
2025-09-19 01:25:09.363319 |
2025-09-19 01:25:09.363434 | PLAY RECAP
2025-09-19 01:25:09.363514 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-09-19 01:25:09.363559 |
2025-09-19 01:25:09.485238 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-19 01:25:09.488264 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-19 01:25:10.227582 |
2025-09-19 01:25:10.227747 | PLAY [Base post]
2025-09-19 01:25:10.242203 |
2025-09-19 01:25:10.242336 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-09-19 01:25:11.206990 | orchestrator | changed
2025-09-19 01:25:11.217381 |
2025-09-19 01:25:11.217516 | PLAY RECAP
2025-09-19 01:25:11.217592 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-09-19 01:25:11.217667 |
2025-09-19 01:25:11.342377 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-19 01:25:11.343438 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-09-19 01:25:12.149546 |
2025-09-19 01:25:12.149738 | PLAY [Base post-logs]
2025-09-19 01:25:12.161019 |
2025-09-19 01:25:12.161165 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-09-19 01:25:12.626913 | localhost | changed
2025-09-19 01:25:12.642374 |
2025-09-19 01:25:12.642539 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-09-19 01:25:12.670955 | localhost | ok
2025-09-19 01:25:12.678106 |
2025-09-19 01:25:12.678285 | TASK [Set zuul-log-path fact]
2025-09-19 01:25:12.695949 | localhost | ok
2025-09-19 01:25:12.708765 |
2025-09-19 01:25:12.708913 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-19 01:25:12.746468 | localhost | ok
2025-09-19 01:25:12.752944 |
2025-09-19 01:25:12.753117 | TASK [upload-logs : Create log directories]
2025-09-19 01:25:13.259931 | localhost | changed
2025-09-19 01:25:13.262826 |
2025-09-19 01:25:13.263258 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-09-19 01:25:13.744572 | localhost -> localhost | ok: Runtime: 0:00:00.007612
2025-09-19 01:25:13.748798 |
2025-09-19 01:25:13.748942 | TASK [upload-logs : Upload logs to log server]
2025-09-19 01:25:14.293677 | localhost | Output suppressed because no_log was given
2025-09-19 01:25:14.295561 |
2025-09-19 01:25:14.295667 | LOOP [upload-logs : Compress console log and json output]
2025-09-19 01:25:14.354765 | localhost | skipping: Conditional result was False
2025-09-19 01:25:14.359691 | localhost | skipping: Conditional result was False
2025-09-19 01:25:14.367129 |
2025-09-19 01:25:14.367359 | LOOP [upload-logs : Upload compressed console log and json output]
2025-09-19 01:25:14.411749 | localhost | skipping: Conditional result was False
2025-09-19 01:25:14.412115 |
2025-09-19 01:25:14.416510 | localhost | skipping: Conditional result was False
2025-09-19 01:25:14.426719 |
2025-09-19 01:25:14.426909 | LOOP [upload-logs : Upload console log and json output]
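Editor's note: the run above fails in "Create test instances" with a per-item loop error (`No Flavor found for SCS-1L-1-5`), i.e. the requested flavor name does not exist in the target cloud. When triaging such consoles, the failed items and their messages can be pulled out mechanically, since Ansible prints each loop failure as `failed: [host] (item=NAME) => {JSON result}`. A minimal sketch (hypothetical helper, not part of the job itself; the regex assumes the default Ansible console format shown above):

```python
import json
import re

# Matches Ansible per-item loop failures as printed in the console, e.g.:
#   failed: [localhost] (item=test) => {"changed": false, "msg": "No Flavor found for SCS-1L-1-5"}
FAILED_ITEM = re.compile(r"failed: \[[^\]]+\] \(item=([^)]+)\) => (\{.*\})")

def failed_items(console_lines):
    """Yield (item, msg) for every per-item loop failure in the console lines."""
    for line in console_lines:
        m = FAILED_ITEM.search(line)
        if m:
            result = json.loads(m.group(2))  # the result dict Ansible prints
            yield m.group(1), result.get("msg", "")

# Example on a line shaped like the failures in this log:
lines = [
    'failed: [localhost] (item=test) => {"changed": false, "msg": "No Flavor found for SCS-1L-1-5"}',
    'ok: [localhost]',
]
print(list(failed_items(lines)))
# → [('test', 'No Flavor found for SCS-1L-1-5')]
```

Once the message is isolated, the fix here is on the cloud side (the flavor `SCS-1L-1-5` must exist, or the job's flavor variable must name one that does), not in the playbook logic.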